00:00:00.001 Started by upstream project "autotest-per-patch" build number 126166 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.047 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.047 The recommended git tool is: git 00:00:00.047 using credential 00000000-0000-0000-0000-000000000002 00:00:00.049 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.070 Fetching changes from the remote Git repository 00:00:00.072 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.114 Using shallow fetch with depth 1 00:00:00.114 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.114 > git --version # timeout=10 00:00:00.164 > git --version # 'git version 2.39.2' 00:00:00.164 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.212 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.212 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.059 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.070 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.084 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:04.084 > git config core.sparsecheckout # timeout=10 00:00:04.096 > git read-tree -mu HEAD # timeout=10 00:00:04.116 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:04.148 Commit message: "inventory: add WCP3 to free inventory" 00:00:04.148 > git rev-list --no-walk 8c6732c9e0fe7c9c74cd1fb560a619e554726af3 # timeout=10 00:00:04.244 [Pipeline] Start of Pipeline 00:00:04.259 [Pipeline] library 00:00:04.260 Loading library shm_lib@master 00:00:04.261 Library shm_lib@master is cached. Copying from home. 00:00:04.280 [Pipeline] node 00:00:04.288 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.291 [Pipeline] { 00:00:04.299 [Pipeline] catchError 00:00:04.301 [Pipeline] { 00:00:04.316 [Pipeline] wrap 00:00:04.328 [Pipeline] { 00:00:04.334 [Pipeline] stage 00:00:04.336 [Pipeline] { (Prologue) 00:00:04.560 [Pipeline] sh 00:00:04.853 + logger -p user.info -t JENKINS-CI 00:00:04.871 [Pipeline] echo 00:00:04.872 Node: CYP9 00:00:04.877 [Pipeline] sh 00:00:05.179 [Pipeline] setCustomBuildProperty 00:00:05.191 [Pipeline] echo 00:00:05.193 Cleanup processes 00:00:05.199 [Pipeline] sh 00:00:05.485 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.485 3207625 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.499 [Pipeline] sh 00:00:05.784 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.784 ++ grep -v 'sudo pgrep' 00:00:05.784 ++ awk '{print $1}' 00:00:05.784 + sudo kill -9 00:00:05.784 + true 00:00:05.800 [Pipeline] cleanWs 00:00:05.810 [WS-CLEANUP] Deleting project workspace... 00:00:05.810 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.818 [WS-CLEANUP] done 00:00:05.823 [Pipeline] setCustomBuildProperty 00:00:05.837 [Pipeline] sh 00:00:06.123 + sudo git config --global --replace-all safe.directory '*' 00:00:06.192 [Pipeline] httpRequest 00:00:06.219 [Pipeline] echo 00:00:06.220 Sorcerer 10.211.164.101 is alive 00:00:06.226 [Pipeline] httpRequest 00:00:06.232 HttpMethod: GET 00:00:06.232 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.233 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.250 Response Code: HTTP/1.1 200 OK 00:00:06.251 Success: Status code 200 is in the accepted range: 200,404 00:00:06.252 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:09.403 [Pipeline] sh 00:00:09.705 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:09.720 [Pipeline] httpRequest 00:00:09.747 [Pipeline] echo 00:00:09.749 Sorcerer 10.211.164.101 is alive 00:00:09.755 [Pipeline] httpRequest 00:00:09.760 HttpMethod: GET 00:00:09.761 URL: http://10.211.164.101/packages/spdk_3b4b1d00cd065a1adf69acd4bb0e6847b5a9d598.tar.gz 00:00:09.761 Sending request to url: http://10.211.164.101/packages/spdk_3b4b1d00cd065a1adf69acd4bb0e6847b5a9d598.tar.gz 00:00:09.768 Response Code: HTTP/1.1 200 OK 00:00:09.769 Success: Status code 200 is in the accepted range: 200,404 00:00:09.769 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_3b4b1d00cd065a1adf69acd4bb0e6847b5a9d598.tar.gz 00:01:33.129 [Pipeline] sh 00:01:33.415 + tar --no-same-owner -xf spdk_3b4b1d00cd065a1adf69acd4bb0e6847b5a9d598.tar.gz 00:01:35.975 [Pipeline] sh 00:01:36.263 + git -C spdk log --oneline -n5 00:01:36.263 3b4b1d00c libvfio-user: bump MAX_DMA_REGIONS 00:01:36.263 32a79de81 lib/event: add disable_cpumask_locks to spdk_app_opts 00:01:36.263 719d03c6a sock/uring: only register net impl if supported 00:01:36.263 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:01:36.263 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:01:36.277 [Pipeline] } 00:01:36.292 [Pipeline] // stage 00:01:36.301 [Pipeline] stage 00:01:36.303 [Pipeline] { (Prepare) 00:01:36.318 [Pipeline] writeFile 00:01:36.331 [Pipeline] sh 00:01:36.616 + logger -p user.info -t JENKINS-CI 00:01:36.629 [Pipeline] sh 00:01:36.915 + logger -p user.info -t JENKINS-CI 00:01:36.928 [Pipeline] sh 00:01:37.212 + cat autorun-spdk.conf 00:01:37.212 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:37.212 SPDK_TEST_NVMF=1 00:01:37.212 SPDK_TEST_NVME_CLI=1 00:01:37.212 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:37.212 SPDK_TEST_NVMF_NICS=e810 00:01:37.212 SPDK_TEST_VFIOUSER=1 00:01:37.212 SPDK_RUN_UBSAN=1 00:01:37.212 NET_TYPE=phy 00:01:37.220 RUN_NIGHTLY=0 00:01:37.223 [Pipeline] readFile 00:01:37.238 [Pipeline] withEnv 00:01:37.240 [Pipeline] { 00:01:37.248 [Pipeline] sh 00:01:37.530 + set -ex 00:01:37.530 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:37.530 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:37.530 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:37.530 ++ SPDK_TEST_NVMF=1 00:01:37.530 ++ SPDK_TEST_NVME_CLI=1 00:01:37.530 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:37.530 ++ SPDK_TEST_NVMF_NICS=e810 00:01:37.530 ++ SPDK_TEST_VFIOUSER=1 00:01:37.530 ++ SPDK_RUN_UBSAN=1 00:01:37.530 ++ NET_TYPE=phy 00:01:37.530 ++ RUN_NIGHTLY=0 00:01:37.530 + case $SPDK_TEST_NVMF_NICS in 00:01:37.530 + DRIVERS=ice 00:01:37.530 + [[ tcp 
== \r\d\m\a ]] 00:01:37.530 + [[ -n ice ]] 00:01:37.530 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:37.530 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:37.530 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:37.530 rmmod: ERROR: Module irdma is not currently loaded 00:01:37.530 rmmod: ERROR: Module i40iw is not currently loaded 00:01:37.530 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:37.530 + true 00:01:37.530 + for D in $DRIVERS 00:01:37.530 + sudo modprobe ice 00:01:37.530 + exit 0 00:01:37.540 [Pipeline] } 00:01:37.553 [Pipeline] // withEnv 00:01:37.557 [Pipeline] } 00:01:37.579 [Pipeline] // stage 00:01:37.590 [Pipeline] catchError 00:01:37.591 [Pipeline] { 00:01:37.605 [Pipeline] timeout 00:01:37.605 Timeout set to expire in 50 min 00:01:37.607 [Pipeline] { 00:01:37.620 [Pipeline] stage 00:01:37.622 [Pipeline] { (Tests) 00:01:37.634 [Pipeline] sh 00:01:37.922 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:37.922 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:37.922 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:37.922 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:37.922 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:37.922 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:37.922 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:37.922 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:37.922 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:37.922 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:37.922 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:37.922 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:37.922 + source /etc/os-release 00:01:37.922 ++ NAME='Fedora Linux' 00:01:37.922 ++ VERSION='38 (Cloud Edition)' 00:01:37.922 ++ ID=fedora 00:01:37.922 ++ VERSION_ID=38 00:01:37.922 ++ VERSION_CODENAME= 00:01:37.922 ++ PLATFORM_ID=platform:f38 00:01:37.922 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:37.922 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:37.922 ++ LOGO=fedora-logo-icon 00:01:37.922 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:37.922 ++ HOME_URL=https://fedoraproject.org/ 00:01:37.922 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:37.922 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:37.922 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:37.922 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:37.922 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:37.922 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:37.922 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:37.922 ++ SUPPORT_END=2024-05-14 00:01:37.922 ++ VARIANT='Cloud Edition' 00:01:37.922 ++ VARIANT_ID=cloud 00:01:37.922 + uname -a 00:01:37.922 Linux spdk-cyp-09 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:37.922 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:40.489 Hugepages 00:01:40.489 node hugesize free / total 00:01:40.489 node0 1048576kB 0 / 0 00:01:40.489 node0 2048kB 0 / 0 00:01:40.489 node1 1048576kB 0 / 0 00:01:40.489 node1 2048kB 0 / 0 00:01:40.489 00:01:40.489 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:40.489 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:40.489 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:40.489 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:40.489 
I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:40.489 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:40.489 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:40.489 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:40.489 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:40.489 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:40.489 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:40.489 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:40.489 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:40.489 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:40.489 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:40.489 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:40.489 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:40.489 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:40.489 + rm -f /tmp/spdk-ld-path 00:01:40.489 + source autorun-spdk.conf 00:01:40.489 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:40.489 ++ SPDK_TEST_NVMF=1 00:01:40.489 ++ SPDK_TEST_NVME_CLI=1 00:01:40.489 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:40.489 ++ SPDK_TEST_NVMF_NICS=e810 00:01:40.489 ++ SPDK_TEST_VFIOUSER=1 00:01:40.489 ++ SPDK_RUN_UBSAN=1 00:01:40.489 ++ NET_TYPE=phy 00:01:40.489 ++ RUN_NIGHTLY=0 00:01:40.489 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:40.489 + [[ -n '' ]] 00:01:40.489 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:40.489 + for M in /var/spdk/build-*-manifest.txt 00:01:40.489 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:40.489 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:40.489 + for M in /var/spdk/build-*-manifest.txt 00:01:40.489 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:40.489 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:40.489 ++ uname 00:01:40.489 + [[ Linux == \L\i\n\u\x ]] 00:01:40.489 + sudo dmesg -T 00:01:40.489 + sudo dmesg --clear 00:01:40.489 + dmesg_pid=3209170 00:01:40.489 + [[ Fedora Linux == FreeBSD ]] 00:01:40.489 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:40.489 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:40.489 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:40.489 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:40.489 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:40.489 + [[ -x /usr/src/fio-static/fio ]] 00:01:40.489 + export FIO_BIN=/usr/src/fio-static/fio 00:01:40.489 + FIO_BIN=/usr/src/fio-static/fio 00:01:40.489 + sudo dmesg -Tw 00:01:40.489 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:40.489 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:40.489 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:40.489 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:40.489 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:40.489 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:40.489 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:40.489 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:40.489 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:40.489 Test configuration: 00:01:40.489 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:40.489 SPDK_TEST_NVMF=1 00:01:40.489 SPDK_TEST_NVME_CLI=1 00:01:40.489 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:40.489 SPDK_TEST_NVMF_NICS=e810 00:01:40.489 SPDK_TEST_VFIOUSER=1 00:01:40.489 SPDK_RUN_UBSAN=1 00:01:40.489 NET_TYPE=phy 00:01:40.752 RUN_NIGHTLY=0 11:13:09 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:40.752 11:13:09 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:40.752 11:13:09 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:40.752 11:13:09 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:40.752 11:13:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:40.752 11:13:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:40.752 11:13:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:40.752 11:13:09 -- paths/export.sh@5 -- $ export PATH 00:01:40.752 11:13:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:40.752 11:13:09 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:40.752 11:13:09 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:40.752 11:13:09 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721034789.XXXXXX 00:01:40.752 11:13:09 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721034789.Y2IYyi 00:01:40.752 11:13:09 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:40.752 11:13:09 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:40.752 11:13:09 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:40.752 11:13:09 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:40.752 11:13:09 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:40.752 11:13:09 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:40.752 11:13:09 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:40.752 11:13:09 -- common/autotest_common.sh@10 -- $ set +x 00:01:40.752 11:13:09 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:40.752 11:13:09 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:40.752 11:13:09 -- pm/common@17 -- $ local monitor 00:01:40.752 11:13:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:40.752 11:13:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:40.752 11:13:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:40.752 11:13:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:40.752 11:13:09 -- pm/common@21 -- $ date +%s 00:01:40.752 11:13:09 -- pm/common@25 -- $ sleep 1 00:01:40.752 11:13:09 -- pm/common@21 -- $ date +%s 00:01:40.752 11:13:09 -- pm/common@21 -- $ date +%s 00:01:40.752 11:13:09 -- pm/common@21 -- $ date +%s 00:01:40.752 11:13:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721034789 00:01:40.752 11:13:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721034789 00:01:40.752 11:13:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721034789 00:01:40.752 11:13:09 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721034789 00:01:40.752 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721034789_collect-vmstat.pm.log 00:01:40.752 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721034789_collect-cpu-load.pm.log 00:01:40.752 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721034789_collect-cpu-temp.pm.log 00:01:40.752 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721034789_collect-bmc-pm.bmc.pm.log 00:01:41.697 11:13:10 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:41.697 11:13:10 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:41.697 11:13:10 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:41.697 11:13:10 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:41.697 11:13:10 -- spdk/autobuild.sh@16 -- $ date -u 00:01:41.697 Mon Jul 15 09:13:10 AM UTC 2024 00:01:41.697 11:13:10 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:41.697 v24.09-pre-204-g3b4b1d00c 00:01:41.697 11:13:10 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:41.697 11:13:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:41.697 11:13:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:41.697 11:13:10 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:41.697 11:13:10 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:41.697 11:13:10 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.697 ************************************ 00:01:41.697 START TEST ubsan 00:01:41.697 ************************************ 00:01:41.697 11:13:10 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:41.697 using ubsan 00:01:41.697 00:01:41.697 real 0m0.001s 00:01:41.697 user 0m0.000s 00:01:41.697 sys 0m0.000s 00:01:41.697 11:13:10 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:41.697 11:13:10 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:41.697 ************************************ 00:01:41.697 END TEST ubsan 00:01:41.697 ************************************ 00:01:41.697 11:13:10 -- common/autotest_common.sh@1142 -- $ return 0 00:01:41.697 11:13:10 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:41.697 11:13:10 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:41.697 11:13:10 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:41.697 11:13:10 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:41.697 11:13:10 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:41.697 11:13:10 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:41.697 11:13:10 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:41.697 11:13:10 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:41.697 11:13:10 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:41.958 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:41.958 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:42.219 Using 'verbs' RDMA provider 00:01:58.074 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:10.310 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:10.310 Creating mk/config.mk...done. 00:02:10.310 Creating mk/cc.flags.mk...done. 00:02:10.310 Type 'make' to build. 
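For reference, the configure-and-build stage recorded above boils down to the commands sketched below. This is a minimal sketch using only the flags visible in this log; the workspace path and the -j144 value are specific to this CI host, and the --with-fio/--with-vfio-user options assume those dependencies are already installed, so a local rebuild would normally substitute its own checkout path and core count.

# Minimal local equivalent of the configure + make stage above (paths and flags taken from this log).
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk        # CI workspace path; use your own SPDK checkout here
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
make -j144                                                  # CI host used -j144; "make -j$(nproc)" is the portable form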
00:02:10.310 11:13:38 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:02:10.310 11:13:38 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:10.310 11:13:38 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:10.310 11:13:38 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.310 ************************************ 00:02:10.310 START TEST make 00:02:10.310 ************************************ 00:02:10.310 11:13:38 make -- common/autotest_common.sh@1123 -- $ make -j144 00:02:10.310 make[1]: Nothing to be done for 'all'. 00:02:11.251 The Meson build system 00:02:11.251 Version: 1.3.1 00:02:11.251 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:11.251 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:11.251 Build type: native build 00:02:11.251 Project name: libvfio-user 00:02:11.251 Project version: 0.0.1 00:02:11.251 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:11.251 C linker for the host machine: cc ld.bfd 2.39-16 00:02:11.251 Host machine cpu family: x86_64 00:02:11.251 Host machine cpu: x86_64 00:02:11.251 Run-time dependency threads found: YES 00:02:11.251 Library dl found: YES 00:02:11.251 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:11.251 Run-time dependency json-c found: YES 0.17 00:02:11.251 Run-time dependency cmocka found: YES 1.1.7 00:02:11.251 Program pytest-3 found: NO 00:02:11.251 Program flake8 found: NO 00:02:11.251 Program misspell-fixer found: NO 00:02:11.251 Program restructuredtext-lint found: NO 00:02:11.251 Program valgrind found: YES (/usr/bin/valgrind) 00:02:11.251 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:11.251 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:11.251 Compiler for C supports arguments -Wwrite-strings: YES 00:02:11.251 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:11.251 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:11.251 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:11.251 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:11.251 Build targets in project: 8 00:02:11.251 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:11.251 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:11.251 00:02:11.251 libvfio-user 0.0.1 00:02:11.251 00:02:11.251 User defined options 00:02:11.251 buildtype : debug 00:02:11.251 default_library: shared 00:02:11.251 libdir : /usr/local/lib 00:02:11.251 00:02:11.251 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:11.509 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:11.766 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:11.766 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:11.766 [3/37] Compiling C object samples/null.p/null.c.o 00:02:11.766 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:11.766 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:11.766 [6/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:11.766 [7/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:11.766 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:11.766 [9/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:11.766 [10/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:11.766 [11/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:11.766 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:11.766 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:11.766 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:11.766 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:11.766 [16/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:11.766 [17/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:11.766 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:11.766 [19/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:11.766 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:11.766 [21/37] Compiling C object samples/server.p/server.c.o 00:02:11.766 [22/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:11.766 [23/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:11.766 [24/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:11.766 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:11.766 [26/37] Compiling C object samples/client.p/client.c.o 00:02:11.766 [27/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:11.766 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:11.766 [29/37] Linking target samples/client 00:02:11.766 [30/37] Linking target test/unit_tests 00:02:11.766 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:02:12.024 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:12.024 [33/37] Linking target samples/lspci 00:02:12.024 [34/37] Linking target samples/server 00:02:12.024 [35/37] Linking target samples/gpio-pci-idio-16 00:02:12.024 [36/37] Linking target samples/null 00:02:12.024 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:12.024 INFO: autodetecting backend as ninja 00:02:12.024 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:02:12.024 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:12.284 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:12.284 ninja: no work to do. 00:02:18.872 The Meson build system 00:02:18.872 Version: 1.3.1 00:02:18.872 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:18.872 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:18.872 Build type: native build 00:02:18.872 Program cat found: YES (/usr/bin/cat) 00:02:18.872 Project name: DPDK 00:02:18.872 Project version: 24.03.0 00:02:18.872 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:18.872 C linker for the host machine: cc ld.bfd 2.39-16 00:02:18.872 Host machine cpu family: x86_64 00:02:18.872 Host machine cpu: x86_64 00:02:18.872 Message: ## Building in Developer Mode ## 00:02:18.872 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:18.872 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:18.872 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:18.872 Program python3 found: YES (/usr/bin/python3) 00:02:18.872 Program cat found: YES (/usr/bin/cat) 00:02:18.872 Compiler for C supports arguments -march=native: YES 00:02:18.872 Checking for size of "void *" : 8 00:02:18.872 Checking for size of "void *" : 8 (cached) 00:02:18.872 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:18.872 Library m found: YES 00:02:18.872 Library numa found: YES 00:02:18.872 Has header "numaif.h" : YES 00:02:18.872 Library fdt found: NO 00:02:18.872 Library execinfo found: NO 00:02:18.872 Has header "execinfo.h" : YES 00:02:18.872 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:18.872 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:18.872 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:18.872 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:18.872 Run-time dependency openssl found: YES 3.0.9 00:02:18.872 Run-time dependency libpcap found: YES 1.10.4 00:02:18.872 Has header "pcap.h" with dependency libpcap: YES 00:02:18.872 Compiler for C supports arguments -Wcast-qual: YES 00:02:18.872 Compiler for C supports arguments -Wdeprecated: YES 00:02:18.872 Compiler for C supports arguments -Wformat: YES 00:02:18.872 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:18.872 Compiler for C supports arguments -Wformat-security: NO 00:02:18.872 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:18.872 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:18.872 Compiler for C supports arguments -Wnested-externs: YES 00:02:18.872 Compiler for C supports arguments -Wold-style-definition: YES 00:02:18.872 Compiler for C supports arguments -Wpointer-arith: YES 00:02:18.872 Compiler for C supports arguments -Wsign-compare: YES 00:02:18.872 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:18.872 Compiler for C supports arguments -Wundef: YES 00:02:18.872 Compiler for C supports arguments -Wwrite-strings: YES 00:02:18.873 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:18.873 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:02:18.873 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:18.873 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:18.873 Program objdump found: YES (/usr/bin/objdump) 00:02:18.873 Compiler for C supports arguments -mavx512f: YES 00:02:18.873 Checking if "AVX512 checking" compiles: YES 00:02:18.873 Fetching value of define "__SSE4_2__" : 1 00:02:18.873 Fetching value of define "__AES__" : 1 00:02:18.873 Fetching value of define "__AVX__" : 1 00:02:18.873 Fetching value of define "__AVX2__" : 1 00:02:18.873 Fetching value of define "__AVX512BW__" : 1 00:02:18.873 Fetching value of define "__AVX512CD__" : 1 00:02:18.873 Fetching value of define "__AVX512DQ__" : 1 00:02:18.873 Fetching value of define "__AVX512F__" : 1 00:02:18.873 Fetching value of define "__AVX512VL__" : 1 00:02:18.873 Fetching value of define "__PCLMUL__" : 1 00:02:18.873 Fetching value of define "__RDRND__" : 1 00:02:18.873 Fetching value of define "__RDSEED__" : 1 00:02:18.873 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:18.873 Fetching value of define "__znver1__" : (undefined) 00:02:18.873 Fetching value of define "__znver2__" : (undefined) 00:02:18.873 Fetching value of define "__znver3__" : (undefined) 00:02:18.873 Fetching value of define "__znver4__" : (undefined) 00:02:18.873 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:18.873 Message: lib/log: Defining dependency "log" 00:02:18.873 Message: lib/kvargs: Defining dependency "kvargs" 00:02:18.873 Message: lib/telemetry: Defining dependency "telemetry" 00:02:18.873 Checking for function "getentropy" : NO 00:02:18.873 Message: lib/eal: Defining dependency "eal" 00:02:18.873 Message: lib/ring: Defining dependency "ring" 00:02:18.873 Message: lib/rcu: Defining dependency "rcu" 00:02:18.873 Message: lib/mempool: Defining dependency "mempool" 00:02:18.873 Message: lib/mbuf: Defining dependency "mbuf" 00:02:18.873 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:18.873 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:18.873 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:18.873 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:18.873 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:18.873 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:18.873 Compiler for C supports arguments -mpclmul: YES 00:02:18.873 Compiler for C supports arguments -maes: YES 00:02:18.873 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:18.873 Compiler for C supports arguments -mavx512bw: YES 00:02:18.873 Compiler for C supports arguments -mavx512dq: YES 00:02:18.873 Compiler for C supports arguments -mavx512vl: YES 00:02:18.873 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:18.873 Compiler for C supports arguments -mavx2: YES 00:02:18.873 Compiler for C supports arguments -mavx: YES 00:02:18.873 Message: lib/net: Defining dependency "net" 00:02:18.873 Message: lib/meter: Defining dependency "meter" 00:02:18.873 Message: lib/ethdev: Defining dependency "ethdev" 00:02:18.873 Message: lib/pci: Defining dependency "pci" 00:02:18.873 Message: lib/cmdline: Defining dependency "cmdline" 00:02:18.873 Message: lib/hash: Defining dependency "hash" 00:02:18.873 Message: lib/timer: Defining dependency "timer" 00:02:18.873 Message: lib/compressdev: Defining dependency "compressdev" 00:02:18.873 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:18.873 Message: lib/dmadev: Defining dependency "dmadev" 00:02:18.873 Compiler for C 
supports arguments -Wno-cast-qual: YES 00:02:18.873 Message: lib/power: Defining dependency "power" 00:02:18.873 Message: lib/reorder: Defining dependency "reorder" 00:02:18.873 Message: lib/security: Defining dependency "security" 00:02:18.873 Has header "linux/userfaultfd.h" : YES 00:02:18.873 Has header "linux/vduse.h" : YES 00:02:18.873 Message: lib/vhost: Defining dependency "vhost" 00:02:18.873 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:18.873 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:18.873 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:18.873 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:18.873 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:18.873 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:18.873 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:18.873 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:18.873 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:18.873 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:18.873 Program doxygen found: YES (/usr/bin/doxygen) 00:02:18.873 Configuring doxy-api-html.conf using configuration 00:02:18.873 Configuring doxy-api-man.conf using configuration 00:02:18.873 Program mandb found: YES (/usr/bin/mandb) 00:02:18.873 Program sphinx-build found: NO 00:02:18.873 Configuring rte_build_config.h using configuration 00:02:18.873 Message: 00:02:18.873 ================= 00:02:18.873 Applications Enabled 00:02:18.873 ================= 00:02:18.873 00:02:18.873 apps: 00:02:18.873 00:02:18.873 00:02:18.873 Message: 00:02:18.873 ================= 00:02:18.873 Libraries Enabled 00:02:18.873 ================= 00:02:18.873 00:02:18.873 libs: 00:02:18.873 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:18.873 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:18.873 cryptodev, dmadev, power, reorder, security, vhost, 00:02:18.873 00:02:18.873 Message: 00:02:18.873 =============== 00:02:18.873 Drivers Enabled 00:02:18.873 =============== 00:02:18.873 00:02:18.873 common: 00:02:18.873 00:02:18.873 bus: 00:02:18.873 pci, vdev, 00:02:18.873 mempool: 00:02:18.873 ring, 00:02:18.873 dma: 00:02:18.873 00:02:18.873 net: 00:02:18.873 00:02:18.873 crypto: 00:02:18.873 00:02:18.873 compress: 00:02:18.873 00:02:18.873 vdpa: 00:02:18.873 00:02:18.873 00:02:18.873 Message: 00:02:18.873 ================= 00:02:18.873 Content Skipped 00:02:18.873 ================= 00:02:18.873 00:02:18.873 apps: 00:02:18.873 dumpcap: explicitly disabled via build config 00:02:18.873 graph: explicitly disabled via build config 00:02:18.873 pdump: explicitly disabled via build config 00:02:18.873 proc-info: explicitly disabled via build config 00:02:18.873 test-acl: explicitly disabled via build config 00:02:18.873 test-bbdev: explicitly disabled via build config 00:02:18.873 test-cmdline: explicitly disabled via build config 00:02:18.873 test-compress-perf: explicitly disabled via build config 00:02:18.873 test-crypto-perf: explicitly disabled via build config 00:02:18.873 test-dma-perf: explicitly disabled via build config 00:02:18.873 test-eventdev: explicitly disabled via build config 00:02:18.873 test-fib: explicitly disabled via build config 00:02:18.873 test-flow-perf: explicitly disabled via build config 00:02:18.873 test-gpudev: explicitly disabled via build config 00:02:18.873 
test-mldev: explicitly disabled via build config 00:02:18.873 test-pipeline: explicitly disabled via build config 00:02:18.873 test-pmd: explicitly disabled via build config 00:02:18.873 test-regex: explicitly disabled via build config 00:02:18.873 test-sad: explicitly disabled via build config 00:02:18.873 test-security-perf: explicitly disabled via build config 00:02:18.873 00:02:18.873 libs: 00:02:18.873 argparse: explicitly disabled via build config 00:02:18.873 metrics: explicitly disabled via build config 00:02:18.873 acl: explicitly disabled via build config 00:02:18.873 bbdev: explicitly disabled via build config 00:02:18.873 bitratestats: explicitly disabled via build config 00:02:18.873 bpf: explicitly disabled via build config 00:02:18.873 cfgfile: explicitly disabled via build config 00:02:18.873 distributor: explicitly disabled via build config 00:02:18.873 efd: explicitly disabled via build config 00:02:18.873 eventdev: explicitly disabled via build config 00:02:18.873 dispatcher: explicitly disabled via build config 00:02:18.873 gpudev: explicitly disabled via build config 00:02:18.873 gro: explicitly disabled via build config 00:02:18.873 gso: explicitly disabled via build config 00:02:18.873 ip_frag: explicitly disabled via build config 00:02:18.873 jobstats: explicitly disabled via build config 00:02:18.873 latencystats: explicitly disabled via build config 00:02:18.873 lpm: explicitly disabled via build config 00:02:18.873 member: explicitly disabled via build config 00:02:18.873 pcapng: explicitly disabled via build config 00:02:18.873 rawdev: explicitly disabled via build config 00:02:18.873 regexdev: explicitly disabled via build config 00:02:18.873 mldev: explicitly disabled via build config 00:02:18.873 rib: explicitly disabled via build config 00:02:18.873 sched: explicitly disabled via build config 00:02:18.873 stack: explicitly disabled via build config 00:02:18.873 ipsec: explicitly disabled via build config 00:02:18.873 pdcp: explicitly disabled via build config 00:02:18.873 fib: explicitly disabled via build config 00:02:18.873 port: explicitly disabled via build config 00:02:18.873 pdump: explicitly disabled via build config 00:02:18.873 table: explicitly disabled via build config 00:02:18.873 pipeline: explicitly disabled via build config 00:02:18.873 graph: explicitly disabled via build config 00:02:18.873 node: explicitly disabled via build config 00:02:18.873 00:02:18.873 drivers: 00:02:18.873 common/cpt: not in enabled drivers build config 00:02:18.873 common/dpaax: not in enabled drivers build config 00:02:18.873 common/iavf: not in enabled drivers build config 00:02:18.873 common/idpf: not in enabled drivers build config 00:02:18.873 common/ionic: not in enabled drivers build config 00:02:18.873 common/mvep: not in enabled drivers build config 00:02:18.873 common/octeontx: not in enabled drivers build config 00:02:18.873 bus/auxiliary: not in enabled drivers build config 00:02:18.873 bus/cdx: not in enabled drivers build config 00:02:18.873 bus/dpaa: not in enabled drivers build config 00:02:18.873 bus/fslmc: not in enabled drivers build config 00:02:18.873 bus/ifpga: not in enabled drivers build config 00:02:18.873 bus/platform: not in enabled drivers build config 00:02:18.873 bus/uacce: not in enabled drivers build config 00:02:18.873 bus/vmbus: not in enabled drivers build config 00:02:18.873 common/cnxk: not in enabled drivers build config 00:02:18.873 common/mlx5: not in enabled drivers build config 00:02:18.873 common/nfp: not in enabled drivers 
build config 00:02:18.873 common/nitrox: not in enabled drivers build config 00:02:18.873 common/qat: not in enabled drivers build config 00:02:18.873 common/sfc_efx: not in enabled drivers build config 00:02:18.873 mempool/bucket: not in enabled drivers build config 00:02:18.874 mempool/cnxk: not in enabled drivers build config 00:02:18.874 mempool/dpaa: not in enabled drivers build config 00:02:18.874 mempool/dpaa2: not in enabled drivers build config 00:02:18.874 mempool/octeontx: not in enabled drivers build config 00:02:18.874 mempool/stack: not in enabled drivers build config 00:02:18.874 dma/cnxk: not in enabled drivers build config 00:02:18.874 dma/dpaa: not in enabled drivers build config 00:02:18.874 dma/dpaa2: not in enabled drivers build config 00:02:18.874 dma/hisilicon: not in enabled drivers build config 00:02:18.874 dma/idxd: not in enabled drivers build config 00:02:18.874 dma/ioat: not in enabled drivers build config 00:02:18.874 dma/skeleton: not in enabled drivers build config 00:02:18.874 net/af_packet: not in enabled drivers build config 00:02:18.874 net/af_xdp: not in enabled drivers build config 00:02:18.874 net/ark: not in enabled drivers build config 00:02:18.874 net/atlantic: not in enabled drivers build config 00:02:18.874 net/avp: not in enabled drivers build config 00:02:18.874 net/axgbe: not in enabled drivers build config 00:02:18.874 net/bnx2x: not in enabled drivers build config 00:02:18.874 net/bnxt: not in enabled drivers build config 00:02:18.874 net/bonding: not in enabled drivers build config 00:02:18.874 net/cnxk: not in enabled drivers build config 00:02:18.874 net/cpfl: not in enabled drivers build config 00:02:18.874 net/cxgbe: not in enabled drivers build config 00:02:18.874 net/dpaa: not in enabled drivers build config 00:02:18.874 net/dpaa2: not in enabled drivers build config 00:02:18.874 net/e1000: not in enabled drivers build config 00:02:18.874 net/ena: not in enabled drivers build config 00:02:18.874 net/enetc: not in enabled drivers build config 00:02:18.874 net/enetfec: not in enabled drivers build config 00:02:18.874 net/enic: not in enabled drivers build config 00:02:18.874 net/failsafe: not in enabled drivers build config 00:02:18.874 net/fm10k: not in enabled drivers build config 00:02:18.874 net/gve: not in enabled drivers build config 00:02:18.874 net/hinic: not in enabled drivers build config 00:02:18.874 net/hns3: not in enabled drivers build config 00:02:18.874 net/i40e: not in enabled drivers build config 00:02:18.874 net/iavf: not in enabled drivers build config 00:02:18.874 net/ice: not in enabled drivers build config 00:02:18.874 net/idpf: not in enabled drivers build config 00:02:18.874 net/igc: not in enabled drivers build config 00:02:18.874 net/ionic: not in enabled drivers build config 00:02:18.874 net/ipn3ke: not in enabled drivers build config 00:02:18.874 net/ixgbe: not in enabled drivers build config 00:02:18.874 net/mana: not in enabled drivers build config 00:02:18.874 net/memif: not in enabled drivers build config 00:02:18.874 net/mlx4: not in enabled drivers build config 00:02:18.874 net/mlx5: not in enabled drivers build config 00:02:18.874 net/mvneta: not in enabled drivers build config 00:02:18.874 net/mvpp2: not in enabled drivers build config 00:02:18.874 net/netvsc: not in enabled drivers build config 00:02:18.874 net/nfb: not in enabled drivers build config 00:02:18.874 net/nfp: not in enabled drivers build config 00:02:18.874 net/ngbe: not in enabled drivers build config 00:02:18.874 net/null: not in 
enabled drivers build config 00:02:18.874 net/octeontx: not in enabled drivers build config 00:02:18.874 net/octeon_ep: not in enabled drivers build config 00:02:18.874 net/pcap: not in enabled drivers build config 00:02:18.874 net/pfe: not in enabled drivers build config 00:02:18.874 net/qede: not in enabled drivers build config 00:02:18.874 net/ring: not in enabled drivers build config 00:02:18.874 net/sfc: not in enabled drivers build config 00:02:18.874 net/softnic: not in enabled drivers build config 00:02:18.874 net/tap: not in enabled drivers build config 00:02:18.874 net/thunderx: not in enabled drivers build config 00:02:18.874 net/txgbe: not in enabled drivers build config 00:02:18.874 net/vdev_netvsc: not in enabled drivers build config 00:02:18.874 net/vhost: not in enabled drivers build config 00:02:18.874 net/virtio: not in enabled drivers build config 00:02:18.874 net/vmxnet3: not in enabled drivers build config 00:02:18.874 raw/*: missing internal dependency, "rawdev" 00:02:18.874 crypto/armv8: not in enabled drivers build config 00:02:18.874 crypto/bcmfs: not in enabled drivers build config 00:02:18.874 crypto/caam_jr: not in enabled drivers build config 00:02:18.874 crypto/ccp: not in enabled drivers build config 00:02:18.874 crypto/cnxk: not in enabled drivers build config 00:02:18.874 crypto/dpaa_sec: not in enabled drivers build config 00:02:18.874 crypto/dpaa2_sec: not in enabled drivers build config 00:02:18.874 crypto/ipsec_mb: not in enabled drivers build config 00:02:18.874 crypto/mlx5: not in enabled drivers build config 00:02:18.874 crypto/mvsam: not in enabled drivers build config 00:02:18.874 crypto/nitrox: not in enabled drivers build config 00:02:18.874 crypto/null: not in enabled drivers build config 00:02:18.874 crypto/octeontx: not in enabled drivers build config 00:02:18.874 crypto/openssl: not in enabled drivers build config 00:02:18.874 crypto/scheduler: not in enabled drivers build config 00:02:18.874 crypto/uadk: not in enabled drivers build config 00:02:18.874 crypto/virtio: not in enabled drivers build config 00:02:18.874 compress/isal: not in enabled drivers build config 00:02:18.874 compress/mlx5: not in enabled drivers build config 00:02:18.874 compress/nitrox: not in enabled drivers build config 00:02:18.874 compress/octeontx: not in enabled drivers build config 00:02:18.874 compress/zlib: not in enabled drivers build config 00:02:18.874 regex/*: missing internal dependency, "regexdev" 00:02:18.874 ml/*: missing internal dependency, "mldev" 00:02:18.874 vdpa/ifc: not in enabled drivers build config 00:02:18.874 vdpa/mlx5: not in enabled drivers build config 00:02:18.874 vdpa/nfp: not in enabled drivers build config 00:02:18.874 vdpa/sfc: not in enabled drivers build config 00:02:18.874 event/*: missing internal dependency, "eventdev" 00:02:18.874 baseband/*: missing internal dependency, "bbdev" 00:02:18.874 gpu/*: missing internal dependency, "gpudev" 00:02:18.874 00:02:18.874 00:02:18.874 Build targets in project: 84 00:02:18.874 00:02:18.874 DPDK 24.03.0 00:02:18.874 00:02:18.874 User defined options 00:02:18.874 buildtype : debug 00:02:18.874 default_library : shared 00:02:18.874 libdir : lib 00:02:18.874 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:18.874 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:18.874 c_link_args : 00:02:18.874 cpu_instruction_set: native 00:02:18.874 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:02:18.874 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:02:18.874 enable_docs : false 00:02:18.874 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:18.874 enable_kmods : false 00:02:18.874 max_lcores : 128 00:02:18.874 tests : false 00:02:18.874 00:02:18.874 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:18.874 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:19.141 [1/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:19.141 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:19.141 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:19.141 [4/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:19.141 [5/267] Linking static target lib/librte_kvargs.a 00:02:19.141 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:19.141 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:19.141 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:19.141 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:19.141 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:19.141 [11/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:19.141 [12/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:19.141 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:19.141 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:19.141 [15/267] Linking static target lib/librte_log.a 00:02:19.141 [16/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:19.141 [17/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:19.141 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:19.399 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:19.399 [20/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:19.399 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:19.400 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:19.400 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:19.400 [24/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:19.400 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:19.400 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:19.400 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:19.400 [28/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:19.400 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:19.400 [30/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:19.400 [31/267] Linking static target 
lib/librte_pci.a 00:02:19.400 [32/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:19.400 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:19.400 [34/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:19.400 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:19.400 [36/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:19.658 [37/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:19.658 [38/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:19.658 [39/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:19.658 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:19.658 [41/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.658 [42/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:19.658 [43/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:19.658 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:19.658 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:19.658 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:19.658 [47/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:19.658 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:19.658 [49/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:19.658 [50/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:19.658 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:19.658 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:19.658 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:19.658 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:19.658 [55/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:19.658 [56/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:19.658 [57/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:19.658 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:19.658 [59/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:19.658 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:19.658 [61/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:19.658 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:19.658 [63/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.658 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:19.658 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:19.658 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:19.658 [67/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:19.658 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:19.658 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:19.658 [70/267] Linking static target lib/librte_meter.a 00:02:19.658 [71/267] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:19.658 [72/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:19.658 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:19.658 [74/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:19.658 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:19.658 [76/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:19.658 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:19.658 [78/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:19.658 [79/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:19.658 [80/267] Linking static target lib/librte_telemetry.a 00:02:19.658 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:19.658 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:19.658 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:19.658 [84/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:19.658 [85/267] Linking static target lib/librte_ring.a 00:02:19.659 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:19.659 [87/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:19.659 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:19.659 [89/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:19.659 [90/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:19.659 [91/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:19.659 [92/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:19.659 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:19.659 [94/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:19.919 [95/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:19.919 [96/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:19.919 [97/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:19.919 [98/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:19.919 [99/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:19.919 [100/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:19.919 [101/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:19.919 [102/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:19.919 [103/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:19.919 [104/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:19.919 [105/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:19.919 [106/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:19.919 [107/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:19.919 [108/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:19.919 [109/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:19.919 [110/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:19.919 [111/267] Compiling C object 
lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:19.919 [112/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:19.919 [113/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:19.919 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:19.919 [115/267] Linking static target lib/librte_timer.a 00:02:19.919 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:19.919 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:19.919 [118/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:19.919 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:19.919 [120/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:19.919 [121/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:19.919 [122/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:19.919 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:19.919 [124/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:19.919 [125/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:19.919 [126/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:19.919 [127/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:19.919 [128/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:19.919 [129/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:19.919 [130/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:19.919 [131/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.919 [132/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:19.919 [133/267] Linking static target lib/librte_mempool.a 00:02:19.919 [134/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:19.919 [135/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:19.919 [136/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:19.919 [137/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:19.919 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:19.919 [139/267] Linking static target lib/librte_rcu.a 00:02:19.919 [140/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:19.919 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:19.919 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:19.919 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:19.919 [144/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:19.919 [145/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:19.919 [146/267] Linking target lib/librte_log.so.24.1 00:02:19.919 [147/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:19.919 [148/267] Linking static target lib/librte_cmdline.a 00:02:19.919 [149/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:19.919 [150/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:19.919 [151/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 
00:02:19.919 [152/267] Linking static target lib/librte_mbuf.a 00:02:19.919 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:19.919 [154/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:19.919 [155/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:19.919 [156/267] Linking static target lib/librte_compressdev.a 00:02:19.919 [157/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:19.919 [158/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:19.919 [159/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:19.919 [160/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:19.919 [161/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.919 [162/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:19.919 [163/267] Linking static target lib/librte_power.a 00:02:19.919 [164/267] Linking static target lib/librte_dmadev.a 00:02:19.919 [165/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:19.919 [166/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:19.919 [167/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:19.919 [168/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:19.919 [169/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:19.919 [170/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:19.919 [171/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:19.919 [172/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:19.919 [173/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:19.919 [174/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:19.919 [175/267] Linking static target lib/librte_net.a 00:02:19.919 [176/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:19.919 [177/267] Linking static target lib/librte_reorder.a 00:02:19.919 [178/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:19.919 [179/267] Linking static target lib/librte_security.a 00:02:20.219 [180/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:20.219 [181/267] Linking static target lib/librte_eal.a 00:02:20.219 [182/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:20.219 [183/267] Linking target lib/librte_kvargs.so.24.1 00:02:20.219 [184/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:20.219 [185/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:20.219 [186/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.219 [187/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:20.219 [188/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:20.219 [189/267] Linking static target lib/librte_hash.a 00:02:20.219 [190/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:20.219 [191/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:20.219 [192/267] Linking static target drivers/librte_bus_vdev.a 00:02:20.219 [193/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:20.219 [194/267] Generating 
symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:20.219 [195/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:20.219 [196/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:20.219 [197/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:20.219 [198/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:20.219 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:20.219 [200/267] Linking static target drivers/librte_mempool_ring.a 00:02:20.219 [201/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:20.219 [202/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:20.219 [203/267] Linking static target lib/librte_cryptodev.a 00:02:20.220 [204/267] Linking static target drivers/librte_bus_pci.a 00:02:20.220 [205/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.220 [206/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:20.220 [207/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.492 [208/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.492 [209/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:20.492 [210/267] Linking target lib/librte_telemetry.so.24.1 00:02:20.492 [211/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.492 [212/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:20.492 [213/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.492 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:20.492 [215/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.752 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.752 [217/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.752 [218/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:20.752 [219/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.752 [220/267] Linking static target lib/librte_ethdev.a 00:02:20.752 [221/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.752 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.013 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.013 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.013 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.275 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.535 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:21.535 [228/267] Linking static target lib/librte_vhost.a 00:02:22.480 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:23.868 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.458 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.844 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.844 [233/267] Linking target lib/librte_eal.so.24.1 00:02:31.844 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:31.844 [235/267] Linking target lib/librte_pci.so.24.1 00:02:31.844 [236/267] Linking target lib/librte_ring.so.24.1 00:02:31.844 [237/267] Linking target lib/librte_dmadev.so.24.1 00:02:31.844 [238/267] Linking target lib/librte_meter.so.24.1 00:02:31.844 [239/267] Linking target lib/librte_timer.so.24.1 00:02:31.844 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:32.105 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:32.105 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:32.105 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:32.105 [244/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:32.105 [245/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:32.105 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:32.105 [247/267] Linking target lib/librte_rcu.so.24.1 00:02:32.105 [248/267] Linking target lib/librte_mempool.so.24.1 00:02:32.105 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:32.366 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:32.366 [251/267] Linking target lib/librte_mbuf.so.24.1 00:02:32.366 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:32.366 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:32.366 [254/267] Linking target lib/librte_compressdev.so.24.1 00:02:32.366 [255/267] Linking target lib/librte_reorder.so.24.1 00:02:32.366 [256/267] Linking target lib/librte_net.so.24.1 00:02:32.366 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:32.626 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:32.626 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:32.626 [260/267] Linking target lib/librte_cmdline.so.24.1 00:02:32.626 [261/267] Linking target lib/librte_hash.so.24.1 00:02:32.626 [262/267] Linking target lib/librte_ethdev.so.24.1 00:02:32.626 [263/267] Linking target lib/librte_security.so.24.1 00:02:32.888 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:32.888 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:32.888 [266/267] Linking target lib/librte_power.so.24.1 00:02:32.888 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:32.888 INFO: autodetecting backend as ninja 00:02:32.888 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:34.273 CC lib/ut/ut.o 00:02:34.273 CC lib/log/log.o 00:02:34.273 CC lib/log/log_flags.o 00:02:34.273 CC lib/log/log_deprecated.o 00:02:34.273 CC lib/ut_mock/mock.o 00:02:34.273 LIB libspdk_ut.a 00:02:34.273 SO libspdk_ut.so.2.0 00:02:34.273 LIB libspdk_log.a 
00:02:34.273 LIB libspdk_ut_mock.a 00:02:34.273 SO libspdk_ut_mock.so.6.0 00:02:34.273 SO libspdk_log.so.7.0 00:02:34.273 SYMLINK libspdk_ut.so 00:02:34.273 SYMLINK libspdk_ut_mock.so 00:02:34.273 SYMLINK libspdk_log.so 00:02:34.533 CC lib/util/base64.o 00:02:34.533 CC lib/util/bit_array.o 00:02:34.533 CC lib/util/cpuset.o 00:02:34.533 CC lib/util/crc16.o 00:02:34.794 CC lib/util/crc32.o 00:02:34.794 CC lib/util/crc32c.o 00:02:34.794 CC lib/util/dif.o 00:02:34.794 CC lib/util/crc32_ieee.o 00:02:34.794 CC lib/util/crc64.o 00:02:34.794 CC lib/util/fd.o 00:02:34.794 CC lib/util/file.o 00:02:34.794 CC lib/util/hexlify.o 00:02:34.794 CC lib/dma/dma.o 00:02:34.794 CC lib/util/iov.o 00:02:34.794 CC lib/util/math.o 00:02:34.794 CC lib/util/pipe.o 00:02:34.794 CC lib/util/strerror_tls.o 00:02:34.794 CC lib/util/string.o 00:02:34.794 CXX lib/trace_parser/trace.o 00:02:34.794 CC lib/util/uuid.o 00:02:34.794 CC lib/util/fd_group.o 00:02:34.794 CC lib/util/xor.o 00:02:34.794 CC lib/ioat/ioat.o 00:02:34.794 CC lib/util/zipf.o 00:02:34.794 CC lib/vfio_user/host/vfio_user_pci.o 00:02:34.794 CC lib/vfio_user/host/vfio_user.o 00:02:34.794 LIB libspdk_dma.a 00:02:35.055 SO libspdk_dma.so.4.0 00:02:35.055 LIB libspdk_ioat.a 00:02:35.055 SYMLINK libspdk_dma.so 00:02:35.055 SO libspdk_ioat.so.7.0 00:02:35.055 SYMLINK libspdk_ioat.so 00:02:35.055 LIB libspdk_vfio_user.a 00:02:35.055 SO libspdk_vfio_user.so.5.0 00:02:35.055 LIB libspdk_util.a 00:02:35.316 SYMLINK libspdk_vfio_user.so 00:02:35.316 SO libspdk_util.so.9.1 00:02:35.316 SYMLINK libspdk_util.so 00:02:35.577 LIB libspdk_trace_parser.a 00:02:35.577 SO libspdk_trace_parser.so.5.0 00:02:35.577 SYMLINK libspdk_trace_parser.so 00:02:35.577 CC lib/rdma_provider/common.o 00:02:35.577 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:35.839 CC lib/json/json_parse.o 00:02:35.839 CC lib/json/json_util.o 00:02:35.839 CC lib/json/json_write.o 00:02:35.839 CC lib/env_dpdk/env.o 00:02:35.839 CC lib/env_dpdk/memory.o 00:02:35.839 CC lib/env_dpdk/pci.o 00:02:35.839 CC lib/env_dpdk/init.o 00:02:35.839 CC lib/conf/conf.o 00:02:35.839 CC lib/env_dpdk/threads.o 00:02:35.839 CC lib/env_dpdk/pci_ioat.o 00:02:35.839 CC lib/env_dpdk/pci_virtio.o 00:02:35.839 CC lib/env_dpdk/pci_vmd.o 00:02:35.839 CC lib/env_dpdk/pci_idxd.o 00:02:35.839 CC lib/idxd/idxd.o 00:02:35.839 CC lib/env_dpdk/pci_event.o 00:02:35.839 CC lib/rdma_utils/rdma_utils.o 00:02:35.839 CC lib/env_dpdk/sigbus_handler.o 00:02:35.839 CC lib/env_dpdk/pci_dpdk.o 00:02:35.839 CC lib/idxd/idxd_user.o 00:02:35.839 CC lib/vmd/vmd.o 00:02:35.839 CC lib/vmd/led.o 00:02:35.839 CC lib/idxd/idxd_kernel.o 00:02:35.839 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:35.839 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:35.839 LIB libspdk_rdma_utils.a 00:02:35.839 LIB libspdk_rdma_provider.a 00:02:35.839 SO libspdk_rdma_provider.so.6.0 00:02:35.839 SO libspdk_rdma_utils.so.1.0 00:02:36.100 LIB libspdk_conf.a 00:02:36.100 LIB libspdk_json.a 00:02:36.100 SO libspdk_conf.so.6.0 00:02:36.100 SYMLINK libspdk_rdma_provider.so 00:02:36.100 SO libspdk_json.so.6.0 00:02:36.100 SYMLINK libspdk_rdma_utils.so 00:02:36.100 SYMLINK libspdk_conf.so 00:02:36.100 SYMLINK libspdk_json.so 00:02:36.100 LIB libspdk_idxd.a 00:02:36.361 SO libspdk_idxd.so.12.0 00:02:36.361 LIB libspdk_vmd.a 00:02:36.361 SYMLINK libspdk_idxd.so 00:02:36.361 SO libspdk_vmd.so.6.0 00:02:36.361 SYMLINK libspdk_vmd.so 00:02:36.361 CC lib/jsonrpc/jsonrpc_server.o 00:02:36.361 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:36.361 CC lib/jsonrpc/jsonrpc_client.o 00:02:36.361 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:02:36.622 LIB libspdk_jsonrpc.a 00:02:36.622 SO libspdk_jsonrpc.so.6.0 00:02:36.883 SYMLINK libspdk_jsonrpc.so 00:02:36.883 LIB libspdk_env_dpdk.a 00:02:36.883 SO libspdk_env_dpdk.so.14.1 00:02:37.144 SYMLINK libspdk_env_dpdk.so 00:02:37.144 CC lib/rpc/rpc.o 00:02:37.405 LIB libspdk_rpc.a 00:02:37.405 SO libspdk_rpc.so.6.0 00:02:37.405 SYMLINK libspdk_rpc.so 00:02:37.978 CC lib/keyring/keyring.o 00:02:37.978 CC lib/keyring/keyring_rpc.o 00:02:37.978 CC lib/notify/notify.o 00:02:37.978 CC lib/notify/notify_rpc.o 00:02:37.978 CC lib/trace/trace.o 00:02:37.978 CC lib/trace/trace_flags.o 00:02:37.978 CC lib/trace/trace_rpc.o 00:02:37.978 LIB libspdk_notify.a 00:02:37.978 LIB libspdk_keyring.a 00:02:37.978 SO libspdk_notify.so.6.0 00:02:37.978 LIB libspdk_trace.a 00:02:37.978 SO libspdk_keyring.so.1.0 00:02:38.239 SO libspdk_trace.so.10.0 00:02:38.239 SYMLINK libspdk_notify.so 00:02:38.239 SYMLINK libspdk_keyring.so 00:02:38.239 SYMLINK libspdk_trace.so 00:02:38.519 CC lib/thread/iobuf.o 00:02:38.519 CC lib/thread/thread.o 00:02:38.519 CC lib/sock/sock.o 00:02:38.519 CC lib/sock/sock_rpc.o 00:02:38.795 LIB libspdk_sock.a 00:02:39.057 SO libspdk_sock.so.10.0 00:02:39.057 SYMLINK libspdk_sock.so 00:02:39.317 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:39.317 CC lib/nvme/nvme_ctrlr.o 00:02:39.317 CC lib/nvme/nvme_fabric.o 00:02:39.317 CC lib/nvme/nvme_pcie_common.o 00:02:39.317 CC lib/nvme/nvme_ns_cmd.o 00:02:39.317 CC lib/nvme/nvme_ns.o 00:02:39.317 CC lib/nvme/nvme_pcie.o 00:02:39.317 CC lib/nvme/nvme_qpair.o 00:02:39.317 CC lib/nvme/nvme.o 00:02:39.317 CC lib/nvme/nvme_quirks.o 00:02:39.317 CC lib/nvme/nvme_transport.o 00:02:39.317 CC lib/nvme/nvme_discovery.o 00:02:39.317 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:39.317 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:39.318 CC lib/nvme/nvme_tcp.o 00:02:39.318 CC lib/nvme/nvme_opal.o 00:02:39.318 CC lib/nvme/nvme_io_msg.o 00:02:39.318 CC lib/nvme/nvme_poll_group.o 00:02:39.318 CC lib/nvme/nvme_zns.o 00:02:39.318 CC lib/nvme/nvme_stubs.o 00:02:39.318 CC lib/nvme/nvme_auth.o 00:02:39.318 CC lib/nvme/nvme_cuse.o 00:02:39.318 CC lib/nvme/nvme_vfio_user.o 00:02:39.318 CC lib/nvme/nvme_rdma.o 00:02:39.887 LIB libspdk_thread.a 00:02:39.887 SO libspdk_thread.so.10.1 00:02:39.887 SYMLINK libspdk_thread.so 00:02:40.148 CC lib/accel/accel.o 00:02:40.148 CC lib/accel/accel_rpc.o 00:02:40.148 CC lib/init/json_config.o 00:02:40.148 CC lib/accel/accel_sw.o 00:02:40.148 CC lib/init/subsystem.o 00:02:40.148 CC lib/init/subsystem_rpc.o 00:02:40.148 CC lib/init/rpc.o 00:02:40.148 CC lib/virtio/virtio.o 00:02:40.148 CC lib/virtio/virtio_vhost_user.o 00:02:40.148 CC lib/virtio/virtio_vfio_user.o 00:02:40.148 CC lib/virtio/virtio_pci.o 00:02:40.148 CC lib/vfu_tgt/tgt_endpoint.o 00:02:40.148 CC lib/vfu_tgt/tgt_rpc.o 00:02:40.148 CC lib/blob/blobstore.o 00:02:40.148 CC lib/blob/request.o 00:02:40.148 CC lib/blob/zeroes.o 00:02:40.148 CC lib/blob/blob_bs_dev.o 00:02:40.409 LIB libspdk_init.a 00:02:40.409 LIB libspdk_vfu_tgt.a 00:02:40.409 SO libspdk_init.so.5.0 00:02:40.409 LIB libspdk_virtio.a 00:02:40.409 SO libspdk_vfu_tgt.so.3.0 00:02:40.670 SO libspdk_virtio.so.7.0 00:02:40.670 SYMLINK libspdk_init.so 00:02:40.670 SYMLINK libspdk_vfu_tgt.so 00:02:40.670 SYMLINK libspdk_virtio.so 00:02:40.931 CC lib/event/app.o 00:02:40.931 CC lib/event/log_rpc.o 00:02:40.931 CC lib/event/reactor.o 00:02:40.931 CC lib/event/app_rpc.o 00:02:40.931 CC lib/event/scheduler_static.o 00:02:40.931 LIB libspdk_accel.a 00:02:41.192 SO libspdk_accel.so.15.1 00:02:41.192 LIB 
libspdk_nvme.a 00:02:41.192 SYMLINK libspdk_accel.so 00:02:41.192 SO libspdk_nvme.so.13.1 00:02:41.192 LIB libspdk_event.a 00:02:41.451 SO libspdk_event.so.14.0 00:02:41.451 SYMLINK libspdk_event.so 00:02:41.451 CC lib/bdev/bdev.o 00:02:41.452 CC lib/bdev/bdev_rpc.o 00:02:41.452 CC lib/bdev/bdev_zone.o 00:02:41.452 CC lib/bdev/part.o 00:02:41.452 CC lib/bdev/scsi_nvme.o 00:02:41.452 SYMLINK libspdk_nvme.so 00:02:42.836 LIB libspdk_blob.a 00:02:42.836 SO libspdk_blob.so.11.0 00:02:42.836 SYMLINK libspdk_blob.so 00:02:43.408 CC lib/lvol/lvol.o 00:02:43.408 CC lib/blobfs/blobfs.o 00:02:43.408 CC lib/blobfs/tree.o 00:02:43.669 LIB libspdk_bdev.a 00:02:43.669 SO libspdk_bdev.so.15.1 00:02:43.931 SYMLINK libspdk_bdev.so 00:02:43.931 LIB libspdk_blobfs.a 00:02:43.931 SO libspdk_blobfs.so.10.0 00:02:44.191 LIB libspdk_lvol.a 00:02:44.191 SYMLINK libspdk_blobfs.so 00:02:44.191 SO libspdk_lvol.so.10.0 00:02:44.191 SYMLINK libspdk_lvol.so 00:02:44.191 CC lib/ftl/ftl_core.o 00:02:44.191 CC lib/ftl/ftl_init.o 00:02:44.191 CC lib/nbd/nbd.o 00:02:44.191 CC lib/nvmf/ctrlr.o 00:02:44.191 CC lib/nbd/nbd_rpc.o 00:02:44.191 CC lib/ftl/ftl_layout.o 00:02:44.191 CC lib/nvmf/ctrlr_discovery.o 00:02:44.191 CC lib/scsi/dev.o 00:02:44.191 CC lib/ftl/ftl_debug.o 00:02:44.191 CC lib/ftl/ftl_io.o 00:02:44.191 CC lib/nvmf/ctrlr_bdev.o 00:02:44.191 CC lib/scsi/lun.o 00:02:44.191 CC lib/ftl/ftl_sb.o 00:02:44.191 CC lib/nvmf/subsystem.o 00:02:44.191 CC lib/scsi/port.o 00:02:44.191 CC lib/ftl/ftl_l2p.o 00:02:44.191 CC lib/nvmf/nvmf_rpc.o 00:02:44.191 CC lib/ublk/ublk.o 00:02:44.191 CC lib/ftl/ftl_l2p_flat.o 00:02:44.191 CC lib/scsi/scsi.o 00:02:44.191 CC lib/ftl/ftl_nv_cache.o 00:02:44.191 CC lib/nvmf/nvmf.o 00:02:44.191 CC lib/ublk/ublk_rpc.o 00:02:44.191 CC lib/scsi/scsi_bdev.o 00:02:44.191 CC lib/ftl/ftl_band.o 00:02:44.191 CC lib/nvmf/transport.o 00:02:44.191 CC lib/ftl/ftl_band_ops.o 00:02:44.191 CC lib/scsi/scsi_pr.o 00:02:44.191 CC lib/nvmf/tcp.o 00:02:44.191 CC lib/ftl/ftl_writer.o 00:02:44.191 CC lib/scsi/scsi_rpc.o 00:02:44.191 CC lib/nvmf/stubs.o 00:02:44.191 CC lib/nvmf/mdns_server.o 00:02:44.191 CC lib/scsi/task.o 00:02:44.191 CC lib/ftl/ftl_rq.o 00:02:44.191 CC lib/ftl/ftl_reloc.o 00:02:44.191 CC lib/nvmf/vfio_user.o 00:02:44.191 CC lib/ftl/ftl_l2p_cache.o 00:02:44.191 CC lib/nvmf/rdma.o 00:02:44.191 CC lib/ftl/ftl_p2l.o 00:02:44.191 CC lib/nvmf/auth.o 00:02:44.191 CC lib/ftl/mngt/ftl_mngt.o 00:02:44.191 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:44.191 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:44.191 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:44.191 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:44.191 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:44.191 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:44.191 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:44.191 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:44.191 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:44.191 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:44.191 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:44.191 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:44.191 CC lib/ftl/utils/ftl_md.o 00:02:44.191 CC lib/ftl/utils/ftl_conf.o 00:02:44.191 CC lib/ftl/utils/ftl_mempool.o 00:02:44.191 CC lib/ftl/utils/ftl_bitmap.o 00:02:44.191 CC lib/ftl/utils/ftl_property.o 00:02:44.191 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:44.191 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:44.191 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:44.191 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:44.191 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:44.191 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:44.191 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 
00:02:44.191 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:44.191 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:44.191 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:44.191 CC lib/ftl/base/ftl_base_dev.o 00:02:44.191 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:44.191 CC lib/ftl/ftl_trace.o 00:02:44.450 CC lib/ftl/base/ftl_base_bdev.o 00:02:44.709 LIB libspdk_nbd.a 00:02:44.969 SO libspdk_nbd.so.7.0 00:02:44.969 LIB libspdk_scsi.a 00:02:44.969 SO libspdk_scsi.so.9.0 00:02:44.969 SYMLINK libspdk_nbd.so 00:02:44.969 LIB libspdk_ublk.a 00:02:44.969 SYMLINK libspdk_scsi.so 00:02:44.969 SO libspdk_ublk.so.3.0 00:02:45.230 SYMLINK libspdk_ublk.so 00:02:45.230 LIB libspdk_ftl.a 00:02:45.230 CC lib/vhost/vhost.o 00:02:45.230 CC lib/vhost/vhost_rpc.o 00:02:45.230 CC lib/vhost/vhost_scsi.o 00:02:45.230 CC lib/iscsi/conn.o 00:02:45.230 CC lib/vhost/vhost_blk.o 00:02:45.230 CC lib/iscsi/init_grp.o 00:02:45.230 CC lib/vhost/rte_vhost_user.o 00:02:45.230 CC lib/iscsi/iscsi.o 00:02:45.230 CC lib/iscsi/md5.o 00:02:45.230 CC lib/iscsi/param.o 00:02:45.230 CC lib/iscsi/portal_grp.o 00:02:45.230 CC lib/iscsi/tgt_node.o 00:02:45.230 CC lib/iscsi/iscsi_subsystem.o 00:02:45.230 CC lib/iscsi/iscsi_rpc.o 00:02:45.230 CC lib/iscsi/task.o 00:02:45.489 SO libspdk_ftl.so.9.0 00:02:45.748 SYMLINK libspdk_ftl.so 00:02:46.008 LIB libspdk_nvmf.a 00:02:46.268 SO libspdk_nvmf.so.18.1 00:02:46.268 LIB libspdk_vhost.a 00:02:46.268 SO libspdk_vhost.so.8.0 00:02:46.529 SYMLINK libspdk_nvmf.so 00:02:46.529 SYMLINK libspdk_vhost.so 00:02:46.529 LIB libspdk_iscsi.a 00:02:46.529 SO libspdk_iscsi.so.8.0 00:02:46.806 SYMLINK libspdk_iscsi.so 00:02:47.378 CC module/vfu_device/vfu_virtio.o 00:02:47.378 CC module/env_dpdk/env_dpdk_rpc.o 00:02:47.378 CC module/vfu_device/vfu_virtio_blk.o 00:02:47.378 CC module/vfu_device/vfu_virtio_rpc.o 00:02:47.378 CC module/vfu_device/vfu_virtio_scsi.o 00:02:47.378 CC module/sock/posix/posix.o 00:02:47.378 CC module/accel/ioat/accel_ioat_rpc.o 00:02:47.378 CC module/accel/ioat/accel_ioat.o 00:02:47.378 LIB libspdk_env_dpdk_rpc.a 00:02:47.378 CC module/accel/iaa/accel_iaa_rpc.o 00:02:47.378 CC module/accel/iaa/accel_iaa.o 00:02:47.378 CC module/scheduler/gscheduler/gscheduler.o 00:02:47.378 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:47.378 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:47.378 CC module/blob/bdev/blob_bdev.o 00:02:47.378 CC module/accel/dsa/accel_dsa.o 00:02:47.378 CC module/accel/dsa/accel_dsa_rpc.o 00:02:47.378 CC module/accel/error/accel_error.o 00:02:47.378 CC module/keyring/file/keyring.o 00:02:47.378 CC module/keyring/linux/keyring.o 00:02:47.378 CC module/accel/error/accel_error_rpc.o 00:02:47.378 CC module/keyring/file/keyring_rpc.o 00:02:47.378 CC module/keyring/linux/keyring_rpc.o 00:02:47.640 SO libspdk_env_dpdk_rpc.so.6.0 00:02:47.640 SYMLINK libspdk_env_dpdk_rpc.so 00:02:47.640 LIB libspdk_scheduler_gscheduler.a 00:02:47.640 LIB libspdk_scheduler_dpdk_governor.a 00:02:47.640 LIB libspdk_accel_ioat.a 00:02:47.640 LIB libspdk_keyring_linux.a 00:02:47.640 SO libspdk_scheduler_gscheduler.so.4.0 00:02:47.640 LIB libspdk_keyring_file.a 00:02:47.640 LIB libspdk_accel_error.a 00:02:47.640 LIB libspdk_accel_iaa.a 00:02:47.640 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:47.640 LIB libspdk_scheduler_dynamic.a 00:02:47.640 SO libspdk_accel_ioat.so.6.0 00:02:47.640 SO libspdk_keyring_file.so.1.0 00:02:47.640 SO libspdk_keyring_linux.so.1.0 00:02:47.640 SO libspdk_accel_error.so.2.0 00:02:47.640 SO libspdk_accel_iaa.so.3.0 00:02:47.640 SYMLINK libspdk_scheduler_gscheduler.so 00:02:47.640 LIB 
libspdk_accel_dsa.a 00:02:47.640 SO libspdk_scheduler_dynamic.so.4.0 00:02:47.901 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:47.901 LIB libspdk_blob_bdev.a 00:02:47.901 SYMLINK libspdk_keyring_file.so 00:02:47.901 SYMLINK libspdk_accel_iaa.so 00:02:47.901 SYMLINK libspdk_accel_ioat.so 00:02:47.901 SO libspdk_accel_dsa.so.5.0 00:02:47.901 SYMLINK libspdk_keyring_linux.so 00:02:47.901 SYMLINK libspdk_accel_error.so 00:02:47.901 SO libspdk_blob_bdev.so.11.0 00:02:47.901 SYMLINK libspdk_scheduler_dynamic.so 00:02:47.901 SYMLINK libspdk_blob_bdev.so 00:02:47.901 SYMLINK libspdk_accel_dsa.so 00:02:47.901 LIB libspdk_vfu_device.a 00:02:47.901 SO libspdk_vfu_device.so.3.0 00:02:48.162 SYMLINK libspdk_vfu_device.so 00:02:48.162 LIB libspdk_sock_posix.a 00:02:48.162 SO libspdk_sock_posix.so.6.0 00:02:48.423 SYMLINK libspdk_sock_posix.so 00:02:48.423 CC module/bdev/null/bdev_null.o 00:02:48.423 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:48.423 CC module/bdev/lvol/vbdev_lvol.o 00:02:48.423 CC module/bdev/null/bdev_null_rpc.o 00:02:48.423 CC module/bdev/delay/vbdev_delay.o 00:02:48.423 CC module/bdev/ftl/bdev_ftl.o 00:02:48.423 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:48.423 CC module/bdev/split/vbdev_split.o 00:02:48.423 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:48.423 CC module/bdev/split/vbdev_split_rpc.o 00:02:48.423 CC module/bdev/raid/bdev_raid.o 00:02:48.423 CC module/bdev/raid/bdev_raid_rpc.o 00:02:48.423 CC module/bdev/raid/bdev_raid_sb.o 00:02:48.423 CC module/bdev/raid/raid0.o 00:02:48.423 CC module/bdev/raid/raid1.o 00:02:48.423 CC module/blobfs/bdev/blobfs_bdev.o 00:02:48.423 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:48.423 CC module/bdev/raid/concat.o 00:02:48.423 CC module/bdev/gpt/gpt.o 00:02:48.423 CC module/bdev/malloc/bdev_malloc.o 00:02:48.423 CC module/bdev/gpt/vbdev_gpt.o 00:02:48.423 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:48.423 CC module/bdev/error/vbdev_error.o 00:02:48.423 CC module/bdev/error/vbdev_error_rpc.o 00:02:48.423 CC module/bdev/passthru/vbdev_passthru.o 00:02:48.423 CC module/bdev/nvme/bdev_nvme.o 00:02:48.423 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:48.423 CC module/bdev/iscsi/bdev_iscsi.o 00:02:48.423 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:48.423 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:48.423 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:48.423 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:48.423 CC module/bdev/nvme/nvme_rpc.o 00:02:48.423 CC module/bdev/nvme/bdev_mdns_client.o 00:02:48.423 CC module/bdev/nvme/vbdev_opal.o 00:02:48.423 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:48.423 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:48.423 CC module/bdev/aio/bdev_aio_rpc.o 00:02:48.423 CC module/bdev/aio/bdev_aio.o 00:02:48.423 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:48.423 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:48.423 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:48.683 LIB libspdk_bdev_error.a 00:02:48.683 LIB libspdk_blobfs_bdev.a 00:02:48.683 LIB libspdk_bdev_null.a 00:02:48.683 SO libspdk_blobfs_bdev.so.6.0 00:02:48.683 LIB libspdk_bdev_gpt.a 00:02:48.683 SO libspdk_bdev_error.so.6.0 00:02:48.683 SO libspdk_bdev_null.so.6.0 00:02:48.683 LIB libspdk_bdev_split.a 00:02:48.683 LIB libspdk_bdev_ftl.a 00:02:48.683 LIB libspdk_bdev_passthru.a 00:02:48.683 SO libspdk_bdev_gpt.so.6.0 00:02:48.683 LIB libspdk_bdev_zone_block.a 00:02:48.683 SO libspdk_bdev_split.so.6.0 00:02:48.683 SYMLINK libspdk_bdev_error.so 00:02:48.683 SO libspdk_bdev_ftl.so.6.0 00:02:48.683 SYMLINK libspdk_blobfs_bdev.so 00:02:48.683 
SYMLINK libspdk_bdev_null.so 00:02:48.684 SO libspdk_bdev_zone_block.so.6.0 00:02:48.945 SO libspdk_bdev_passthru.so.6.0 00:02:48.945 LIB libspdk_bdev_delay.a 00:02:48.945 LIB libspdk_bdev_aio.a 00:02:48.945 LIB libspdk_bdev_malloc.a 00:02:48.945 LIB libspdk_bdev_iscsi.a 00:02:48.945 SYMLINK libspdk_bdev_split.so 00:02:48.945 SYMLINK libspdk_bdev_gpt.so 00:02:48.945 SYMLINK libspdk_bdev_ftl.so 00:02:48.945 SO libspdk_bdev_delay.so.6.0 00:02:48.945 SO libspdk_bdev_aio.so.6.0 00:02:48.945 SO libspdk_bdev_malloc.so.6.0 00:02:48.945 SYMLINK libspdk_bdev_zone_block.so 00:02:48.945 SO libspdk_bdev_iscsi.so.6.0 00:02:48.945 SYMLINK libspdk_bdev_passthru.so 00:02:48.945 SYMLINK libspdk_bdev_delay.so 00:02:48.945 LIB libspdk_bdev_lvol.a 00:02:48.945 SYMLINK libspdk_bdev_aio.so 00:02:48.945 SYMLINK libspdk_bdev_malloc.so 00:02:48.945 SYMLINK libspdk_bdev_iscsi.so 00:02:48.945 LIB libspdk_bdev_virtio.a 00:02:48.945 SO libspdk_bdev_lvol.so.6.0 00:02:48.945 SO libspdk_bdev_virtio.so.6.0 00:02:48.945 SYMLINK libspdk_bdev_lvol.so 00:02:49.207 SYMLINK libspdk_bdev_virtio.so 00:02:49.207 LIB libspdk_bdev_raid.a 00:02:49.469 SO libspdk_bdev_raid.so.6.0 00:02:49.469 SYMLINK libspdk_bdev_raid.so 00:02:50.412 LIB libspdk_bdev_nvme.a 00:02:50.412 SO libspdk_bdev_nvme.so.7.0 00:02:50.673 SYMLINK libspdk_bdev_nvme.so 00:02:51.246 CC module/event/subsystems/scheduler/scheduler.o 00:02:51.246 CC module/event/subsystems/iobuf/iobuf.o 00:02:51.246 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:51.246 CC module/event/subsystems/sock/sock.o 00:02:51.246 CC module/event/subsystems/vmd/vmd.o 00:02:51.246 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:51.246 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:51.246 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:51.246 CC module/event/subsystems/keyring/keyring.o 00:02:51.507 LIB libspdk_event_scheduler.a 00:02:51.507 LIB libspdk_event_keyring.a 00:02:51.507 LIB libspdk_event_iobuf.a 00:02:51.507 LIB libspdk_event_vhost_blk.a 00:02:51.507 LIB libspdk_event_sock.a 00:02:51.507 LIB libspdk_event_vmd.a 00:02:51.507 LIB libspdk_event_vfu_tgt.a 00:02:51.507 SO libspdk_event_keyring.so.1.0 00:02:51.507 SO libspdk_event_scheduler.so.4.0 00:02:51.507 SO libspdk_event_sock.so.5.0 00:02:51.507 SO libspdk_event_vhost_blk.so.3.0 00:02:51.507 SO libspdk_event_iobuf.so.3.0 00:02:51.507 SO libspdk_event_vmd.so.6.0 00:02:51.507 SO libspdk_event_vfu_tgt.so.3.0 00:02:51.507 SYMLINK libspdk_event_keyring.so 00:02:51.507 SYMLINK libspdk_event_sock.so 00:02:51.507 SYMLINK libspdk_event_scheduler.so 00:02:51.507 SYMLINK libspdk_event_vhost_blk.so 00:02:51.507 SYMLINK libspdk_event_vfu_tgt.so 00:02:51.507 SYMLINK libspdk_event_vmd.so 00:02:51.507 SYMLINK libspdk_event_iobuf.so 00:02:52.080 CC module/event/subsystems/accel/accel.o 00:02:52.080 LIB libspdk_event_accel.a 00:02:52.080 SO libspdk_event_accel.so.6.0 00:02:52.080 SYMLINK libspdk_event_accel.so 00:02:52.653 CC module/event/subsystems/bdev/bdev.o 00:02:52.653 LIB libspdk_event_bdev.a 00:02:52.653 SO libspdk_event_bdev.so.6.0 00:02:52.914 SYMLINK libspdk_event_bdev.so 00:02:53.175 CC module/event/subsystems/scsi/scsi.o 00:02:53.176 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:53.176 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:53.176 CC module/event/subsystems/nbd/nbd.o 00:02:53.176 CC module/event/subsystems/ublk/ublk.o 00:02:53.176 LIB libspdk_event_scsi.a 00:02:53.176 SO libspdk_event_scsi.so.6.0 00:02:53.176 LIB libspdk_event_nbd.a 00:02:53.436 LIB libspdk_event_ublk.a 00:02:53.436 SO libspdk_event_nbd.so.6.0 
00:02:53.436 SYMLINK libspdk_event_scsi.so 00:02:53.436 SO libspdk_event_ublk.so.3.0 00:02:53.436 LIB libspdk_event_nvmf.a 00:02:53.436 SYMLINK libspdk_event_nbd.so 00:02:53.436 SO libspdk_event_nvmf.so.6.0 00:02:53.436 SYMLINK libspdk_event_ublk.so 00:02:53.436 SYMLINK libspdk_event_nvmf.so 00:02:53.725 CC module/event/subsystems/iscsi/iscsi.o 00:02:53.725 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:53.986 LIB libspdk_event_vhost_scsi.a 00:02:53.986 LIB libspdk_event_iscsi.a 00:02:53.986 SO libspdk_event_vhost_scsi.so.3.0 00:02:53.986 SO libspdk_event_iscsi.so.6.0 00:02:53.986 SYMLINK libspdk_event_vhost_scsi.so 00:02:53.986 SYMLINK libspdk_event_iscsi.so 00:02:54.248 SO libspdk.so.6.0 00:02:54.248 SYMLINK libspdk.so 00:02:54.511 CXX app/trace/trace.o 00:02:54.511 CC app/trace_record/trace_record.o 00:02:54.511 CC test/rpc_client/rpc_client_test.o 00:02:54.511 CC app/spdk_nvme_perf/perf.o 00:02:54.511 CC app/spdk_lspci/spdk_lspci.o 00:02:54.511 CC app/spdk_nvme_discover/discovery_aer.o 00:02:54.511 CC app/spdk_top/spdk_top.o 00:02:54.511 CC app/spdk_nvme_identify/identify.o 00:02:54.511 TEST_HEADER include/spdk/accel.h 00:02:54.511 TEST_HEADER include/spdk/accel_module.h 00:02:54.511 TEST_HEADER include/spdk/assert.h 00:02:54.511 TEST_HEADER include/spdk/barrier.h 00:02:54.511 TEST_HEADER include/spdk/base64.h 00:02:54.511 TEST_HEADER include/spdk/bdev.h 00:02:54.511 TEST_HEADER include/spdk/bdev_module.h 00:02:54.511 TEST_HEADER include/spdk/bdev_zone.h 00:02:54.511 TEST_HEADER include/spdk/bit_array.h 00:02:54.511 TEST_HEADER include/spdk/blob_bdev.h 00:02:54.511 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:54.511 TEST_HEADER include/spdk/bit_pool.h 00:02:54.511 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:54.511 TEST_HEADER include/spdk/blobfs.h 00:02:54.511 TEST_HEADER include/spdk/blob.h 00:02:54.511 TEST_HEADER include/spdk/config.h 00:02:54.511 TEST_HEADER include/spdk/conf.h 00:02:54.511 CC app/iscsi_tgt/iscsi_tgt.o 00:02:54.511 TEST_HEADER include/spdk/cpuset.h 00:02:54.511 TEST_HEADER include/spdk/crc16.h 00:02:54.511 TEST_HEADER include/spdk/crc32.h 00:02:54.511 TEST_HEADER include/spdk/crc64.h 00:02:54.511 TEST_HEADER include/spdk/dif.h 00:02:54.511 TEST_HEADER include/spdk/dma.h 00:02:54.511 TEST_HEADER include/spdk/endian.h 00:02:54.511 TEST_HEADER include/spdk/env_dpdk.h 00:02:54.511 TEST_HEADER include/spdk/env.h 00:02:54.511 CC app/spdk_dd/spdk_dd.o 00:02:54.511 TEST_HEADER include/spdk/event.h 00:02:54.511 TEST_HEADER include/spdk/fd.h 00:02:54.511 TEST_HEADER include/spdk/fd_group.h 00:02:54.771 TEST_HEADER include/spdk/file.h 00:02:54.771 TEST_HEADER include/spdk/ftl.h 00:02:54.771 TEST_HEADER include/spdk/gpt_spec.h 00:02:54.771 TEST_HEADER include/spdk/hexlify.h 00:02:54.771 TEST_HEADER include/spdk/histogram_data.h 00:02:54.771 TEST_HEADER include/spdk/idxd.h 00:02:54.771 TEST_HEADER include/spdk/idxd_spec.h 00:02:54.771 TEST_HEADER include/spdk/init.h 00:02:54.771 TEST_HEADER include/spdk/ioat.h 00:02:54.771 TEST_HEADER include/spdk/ioat_spec.h 00:02:54.771 TEST_HEADER include/spdk/iscsi_spec.h 00:02:54.771 TEST_HEADER include/spdk/json.h 00:02:54.771 TEST_HEADER include/spdk/jsonrpc.h 00:02:54.771 TEST_HEADER include/spdk/keyring.h 00:02:54.771 TEST_HEADER include/spdk/keyring_module.h 00:02:54.771 TEST_HEADER include/spdk/likely.h 00:02:54.771 TEST_HEADER include/spdk/log.h 00:02:54.771 TEST_HEADER include/spdk/lvol.h 00:02:54.771 TEST_HEADER include/spdk/nbd.h 00:02:54.771 TEST_HEADER include/spdk/memory.h 00:02:54.771 TEST_HEADER 
include/spdk/mmio.h 00:02:54.771 TEST_HEADER include/spdk/notify.h 00:02:54.771 TEST_HEADER include/spdk/nvme.h 00:02:54.771 TEST_HEADER include/spdk/nvme_intel.h 00:02:54.771 CC app/nvmf_tgt/nvmf_main.o 00:02:54.771 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:54.771 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:54.771 TEST_HEADER include/spdk/nvme_spec.h 00:02:54.771 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:54.771 TEST_HEADER include/spdk/nvme_zns.h 00:02:54.771 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:54.771 TEST_HEADER include/spdk/nvmf.h 00:02:54.771 TEST_HEADER include/spdk/nvmf_spec.h 00:02:54.771 TEST_HEADER include/spdk/nvmf_transport.h 00:02:54.771 TEST_HEADER include/spdk/opal.h 00:02:54.771 TEST_HEADER include/spdk/opal_spec.h 00:02:54.771 TEST_HEADER include/spdk/pci_ids.h 00:02:54.771 TEST_HEADER include/spdk/pipe.h 00:02:54.771 TEST_HEADER include/spdk/reduce.h 00:02:54.771 TEST_HEADER include/spdk/queue.h 00:02:54.771 TEST_HEADER include/spdk/scheduler.h 00:02:54.771 TEST_HEADER include/spdk/rpc.h 00:02:54.771 TEST_HEADER include/spdk/scsi.h 00:02:54.771 TEST_HEADER include/spdk/sock.h 00:02:54.771 CC app/spdk_tgt/spdk_tgt.o 00:02:54.771 TEST_HEADER include/spdk/scsi_spec.h 00:02:54.771 TEST_HEADER include/spdk/stdinc.h 00:02:54.771 TEST_HEADER include/spdk/string.h 00:02:54.771 TEST_HEADER include/spdk/thread.h 00:02:54.771 TEST_HEADER include/spdk/trace.h 00:02:54.771 TEST_HEADER include/spdk/tree.h 00:02:54.771 TEST_HEADER include/spdk/trace_parser.h 00:02:54.771 TEST_HEADER include/spdk/util.h 00:02:54.771 TEST_HEADER include/spdk/ublk.h 00:02:54.771 TEST_HEADER include/spdk/uuid.h 00:02:54.771 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:54.771 TEST_HEADER include/spdk/version.h 00:02:54.771 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:54.771 TEST_HEADER include/spdk/vhost.h 00:02:54.771 TEST_HEADER include/spdk/zipf.h 00:02:54.771 TEST_HEADER include/spdk/vmd.h 00:02:54.771 TEST_HEADER include/spdk/xor.h 00:02:54.771 CXX test/cpp_headers/accel.o 00:02:54.771 CXX test/cpp_headers/accel_module.o 00:02:54.771 CXX test/cpp_headers/barrier.o 00:02:54.771 CXX test/cpp_headers/assert.o 00:02:54.771 CXX test/cpp_headers/base64.o 00:02:54.771 CXX test/cpp_headers/bdev.o 00:02:54.771 CXX test/cpp_headers/bdev_module.o 00:02:54.771 CXX test/cpp_headers/bdev_zone.o 00:02:54.771 CXX test/cpp_headers/bit_array.o 00:02:54.771 CXX test/cpp_headers/bit_pool.o 00:02:54.771 CXX test/cpp_headers/blob_bdev.o 00:02:54.771 CXX test/cpp_headers/blobfs_bdev.o 00:02:54.771 CXX test/cpp_headers/blobfs.o 00:02:54.771 CXX test/cpp_headers/blob.o 00:02:54.771 CXX test/cpp_headers/conf.o 00:02:54.771 CXX test/cpp_headers/config.o 00:02:54.771 CXX test/cpp_headers/cpuset.o 00:02:54.771 CXX test/cpp_headers/crc16.o 00:02:54.771 CXX test/cpp_headers/crc32.o 00:02:54.771 CXX test/cpp_headers/crc64.o 00:02:54.771 CXX test/cpp_headers/dif.o 00:02:54.771 CXX test/cpp_headers/dma.o 00:02:54.771 CXX test/cpp_headers/endian.o 00:02:54.771 CXX test/cpp_headers/env_dpdk.o 00:02:54.771 CXX test/cpp_headers/env.o 00:02:54.771 CXX test/cpp_headers/fd_group.o 00:02:54.771 CXX test/cpp_headers/fd.o 00:02:54.771 CXX test/cpp_headers/event.o 00:02:54.771 CXX test/cpp_headers/file.o 00:02:54.771 CXX test/cpp_headers/ftl.o 00:02:54.771 CXX test/cpp_headers/hexlify.o 00:02:54.771 CXX test/cpp_headers/gpt_spec.o 00:02:54.771 CXX test/cpp_headers/histogram_data.o 00:02:54.771 CXX test/cpp_headers/idxd.o 00:02:54.771 CXX test/cpp_headers/ioat_spec.o 00:02:54.771 CXX test/cpp_headers/iscsi_spec.o 00:02:54.771 
CXX test/cpp_headers/idxd_spec.o 00:02:54.771 CXX test/cpp_headers/init.o 00:02:54.771 CXX test/cpp_headers/json.o 00:02:54.771 CXX test/cpp_headers/ioat.o 00:02:54.771 CXX test/cpp_headers/jsonrpc.o 00:02:54.771 CXX test/cpp_headers/log.o 00:02:54.771 CXX test/cpp_headers/likely.o 00:02:54.771 CXX test/cpp_headers/keyring_module.o 00:02:54.771 CXX test/cpp_headers/keyring.o 00:02:54.771 CXX test/cpp_headers/memory.o 00:02:54.771 CXX test/cpp_headers/lvol.o 00:02:54.771 CXX test/cpp_headers/mmio.o 00:02:54.771 LINK spdk_lspci 00:02:54.771 CXX test/cpp_headers/nbd.o 00:02:54.771 CXX test/cpp_headers/notify.o 00:02:54.771 CC examples/ioat/perf/perf.o 00:02:54.771 CXX test/cpp_headers/nvme_spec.o 00:02:54.771 CXX test/cpp_headers/nvme_ocssd.o 00:02:54.771 CXX test/cpp_headers/nvme.o 00:02:54.771 CXX test/cpp_headers/nvme_zns.o 00:02:54.771 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:54.771 CXX test/cpp_headers/nvmf_cmd.o 00:02:54.771 CXX test/cpp_headers/nvmf.o 00:02:54.771 CXX test/cpp_headers/nvme_intel.o 00:02:54.771 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:54.771 CXX test/cpp_headers/nvmf_spec.o 00:02:54.771 CC test/app/jsoncat/jsoncat.o 00:02:54.771 CC test/app/histogram_perf/histogram_perf.o 00:02:54.771 CC app/fio/nvme/fio_plugin.o 00:02:54.772 CXX test/cpp_headers/opal_spec.o 00:02:54.772 CC test/thread/poller_perf/poller_perf.o 00:02:54.772 CXX test/cpp_headers/pci_ids.o 00:02:54.772 CXX test/cpp_headers/nvmf_transport.o 00:02:54.772 CXX test/cpp_headers/opal.o 00:02:54.772 CC test/env/vtophys/vtophys.o 00:02:54.772 CXX test/cpp_headers/queue.o 00:02:54.772 CXX test/cpp_headers/reduce.o 00:02:54.772 CXX test/cpp_headers/pipe.o 00:02:54.772 CXX test/cpp_headers/scsi.o 00:02:54.772 CXX test/cpp_headers/rpc.o 00:02:54.772 CXX test/cpp_headers/scsi_spec.o 00:02:54.772 CXX test/cpp_headers/scheduler.o 00:02:54.772 CC examples/util/zipf/zipf.o 00:02:54.772 CXX test/cpp_headers/stdinc.o 00:02:54.772 CXX test/cpp_headers/string.o 00:02:54.772 CXX test/cpp_headers/sock.o 00:02:54.772 CXX test/cpp_headers/thread.o 00:02:54.772 CXX test/cpp_headers/tree.o 00:02:54.772 CXX test/cpp_headers/trace.o 00:02:54.772 CC test/env/pci/pci_ut.o 00:02:54.772 CXX test/cpp_headers/trace_parser.o 00:02:54.772 CXX test/cpp_headers/uuid.o 00:02:54.772 CC examples/ioat/verify/verify.o 00:02:54.772 CXX test/cpp_headers/ublk.o 00:02:54.772 CXX test/cpp_headers/vfio_user_spec.o 00:02:54.772 CXX test/cpp_headers/util.o 00:02:54.772 CXX test/cpp_headers/version.o 00:02:54.772 CXX test/cpp_headers/vfio_user_pci.o 00:02:54.772 CXX test/cpp_headers/zipf.o 00:02:55.033 CXX test/cpp_headers/vhost.o 00:02:55.033 CXX test/cpp_headers/vmd.o 00:02:55.033 CXX test/cpp_headers/xor.o 00:02:55.033 CC test/app/stub/stub.o 00:02:55.033 CC test/env/memory/memory_ut.o 00:02:55.033 CC test/app/bdev_svc/bdev_svc.o 00:02:55.033 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:55.033 LINK spdk_nvme_discover 00:02:55.033 LINK rpc_client_test 00:02:55.033 LINK interrupt_tgt 00:02:55.033 CC app/fio/bdev/fio_plugin.o 00:02:55.033 CC test/dma/test_dma/test_dma.o 00:02:55.033 LINK iscsi_tgt 00:02:55.033 LINK spdk_trace_record 00:02:55.292 LINK nvmf_tgt 00:02:55.292 LINK poller_perf 00:02:55.292 LINK spdk_tgt 00:02:55.292 LINK ioat_perf 00:02:55.292 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:55.292 LINK spdk_trace 00:02:55.292 CC test/env/mem_callbacks/mem_callbacks.o 00:02:55.293 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:55.293 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:55.293 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 
00:02:55.551 LINK spdk_dd 00:02:55.551 LINK vtophys 00:02:55.551 LINK jsoncat 00:02:55.551 LINK histogram_perf 00:02:55.551 LINK zipf 00:02:55.551 LINK env_dpdk_post_init 00:02:55.551 LINK stub 00:02:55.551 LINK bdev_svc 00:02:55.811 LINK verify 00:02:55.812 CC app/vhost/vhost.o 00:02:55.812 LINK test_dma 00:02:55.812 CC test/event/reactor/reactor.o 00:02:55.812 LINK pci_ut 00:02:55.812 CC test/event/reactor_perf/reactor_perf.o 00:02:55.812 CC test/event/event_perf/event_perf.o 00:02:55.812 CC test/event/app_repeat/app_repeat.o 00:02:55.812 CC test/event/scheduler/scheduler.o 00:02:55.812 LINK vhost_fuzz 00:02:55.812 LINK nvme_fuzz 00:02:55.812 LINK spdk_bdev 00:02:55.812 LINK spdk_nvme 00:02:56.073 LINK spdk_nvme_identify 00:02:56.073 LINK reactor 00:02:56.073 LINK spdk_nvme_perf 00:02:56.073 LINK reactor_perf 00:02:56.073 LINK vhost 00:02:56.073 LINK event_perf 00:02:56.073 CC examples/vmd/led/led.o 00:02:56.073 CC examples/vmd/lsvmd/lsvmd.o 00:02:56.073 CC examples/idxd/perf/perf.o 00:02:56.073 CC examples/sock/hello_world/hello_sock.o 00:02:56.073 LINK app_repeat 00:02:56.073 LINK mem_callbacks 00:02:56.073 LINK spdk_top 00:02:56.073 CC examples/thread/thread/thread_ex.o 00:02:56.073 LINK scheduler 00:02:56.332 LINK lsvmd 00:02:56.332 LINK led 00:02:56.332 LINK hello_sock 00:02:56.332 CC test/nvme/compliance/nvme_compliance.o 00:02:56.332 CC test/nvme/aer/aer.o 00:02:56.332 CC test/nvme/startup/startup.o 00:02:56.332 CC test/nvme/fused_ordering/fused_ordering.o 00:02:56.332 CC test/nvme/err_injection/err_injection.o 00:02:56.332 CC test/nvme/sgl/sgl.o 00:02:56.332 CC test/nvme/reset/reset.o 00:02:56.332 CC test/nvme/connect_stress/connect_stress.o 00:02:56.332 CC test/nvme/e2edp/nvme_dp.o 00:02:56.332 CC test/nvme/cuse/cuse.o 00:02:56.332 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:56.332 CC test/nvme/reserve/reserve.o 00:02:56.332 CC test/nvme/fdp/fdp.o 00:02:56.332 CC test/nvme/overhead/overhead.o 00:02:56.332 CC test/nvme/boot_partition/boot_partition.o 00:02:56.333 LINK idxd_perf 00:02:56.333 CC test/nvme/simple_copy/simple_copy.o 00:02:56.333 CC test/blobfs/mkfs/mkfs.o 00:02:56.333 LINK thread 00:02:56.333 CC test/accel/dif/dif.o 00:02:56.599 LINK memory_ut 00:02:56.599 LINK boot_partition 00:02:56.599 CC test/lvol/esnap/esnap.o 00:02:56.599 LINK err_injection 00:02:56.599 LINK doorbell_aers 00:02:56.599 LINK startup 00:02:56.599 LINK connect_stress 00:02:56.599 LINK fused_ordering 00:02:56.599 LINK reserve 00:02:56.599 LINK mkfs 00:02:56.599 LINK simple_copy 00:02:56.599 LINK reset 00:02:56.599 LINK nvme_dp 00:02:56.599 LINK sgl 00:02:56.599 LINK nvme_compliance 00:02:56.600 LINK overhead 00:02:56.600 LINK aer 00:02:56.600 LINK fdp 00:02:56.860 CC examples/nvme/abort/abort.o 00:02:56.860 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:56.860 CC examples/nvme/hotplug/hotplug.o 00:02:56.860 CC examples/nvme/hello_world/hello_world.o 00:02:56.860 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:56.860 CC examples/nvme/reconnect/reconnect.o 00:02:56.860 CC examples/nvme/arbitration/arbitration.o 00:02:56.860 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:56.860 LINK dif 00:02:56.860 LINK iscsi_fuzz 00:02:56.860 CC examples/accel/perf/accel_perf.o 00:02:57.121 CC examples/blob/hello_world/hello_blob.o 00:02:57.121 CC examples/blob/cli/blobcli.o 00:02:57.121 LINK pmr_persistence 00:02:57.121 LINK cmb_copy 00:02:57.121 LINK hello_world 00:02:57.121 LINK hotplug 00:02:57.121 LINK abort 00:02:57.121 LINK reconnect 00:02:57.121 LINK arbitration 00:02:57.382 LINK hello_blob 
00:02:57.382 LINK nvme_manage 00:02:57.382 LINK accel_perf 00:02:57.382 CC test/bdev/bdevio/bdevio.o 00:02:57.382 LINK blobcli 00:02:57.644 LINK cuse 00:02:57.906 LINK bdevio 00:02:57.906 CC examples/bdev/hello_world/hello_bdev.o 00:02:57.906 CC examples/bdev/bdevperf/bdevperf.o 00:02:58.167 LINK hello_bdev 00:02:58.740 LINK bdevperf 00:02:59.311 CC examples/nvmf/nvmf/nvmf.o 00:02:59.592 LINK nvmf 00:03:00.533 LINK esnap 00:03:01.109 00:03:01.109 real 0m51.360s 00:03:01.109 user 6m32.652s 00:03:01.109 sys 4m34.190s 00:03:01.109 11:14:29 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:01.109 11:14:29 make -- common/autotest_common.sh@10 -- $ set +x 00:03:01.109 ************************************ 00:03:01.109 END TEST make 00:03:01.109 ************************************ 00:03:01.109 11:14:29 -- common/autotest_common.sh@1142 -- $ return 0 00:03:01.109 11:14:29 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:01.109 11:14:29 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:01.109 11:14:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:01.109 11:14:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.109 11:14:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:01.109 11:14:29 -- pm/common@44 -- $ pid=3209205 00:03:01.109 11:14:29 -- pm/common@50 -- $ kill -TERM 3209205 00:03:01.109 11:14:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.109 11:14:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:01.109 11:14:29 -- pm/common@44 -- $ pid=3209206 00:03:01.109 11:14:29 -- pm/common@50 -- $ kill -TERM 3209206 00:03:01.109 11:14:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.109 11:14:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:01.109 11:14:29 -- pm/common@44 -- $ pid=3209208 00:03:01.109 11:14:29 -- pm/common@50 -- $ kill -TERM 3209208 00:03:01.109 11:14:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.109 11:14:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:01.109 11:14:29 -- pm/common@44 -- $ pid=3209232 00:03:01.109 11:14:29 -- pm/common@50 -- $ sudo -E kill -TERM 3209232 00:03:01.109 11:14:29 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:01.109 11:14:29 -- nvmf/common.sh@7 -- # uname -s 00:03:01.109 11:14:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:01.109 11:14:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:01.109 11:14:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:01.109 11:14:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:01.109 11:14:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:01.109 11:14:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:01.109 11:14:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:01.109 11:14:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:01.109 11:14:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:01.109 11:14:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:01.371 11:14:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:01.371 11:14:29 -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:01.371 11:14:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:01.371 11:14:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:01.371 11:14:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:01.371 11:14:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:01.371 11:14:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:01.371 11:14:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:01.371 11:14:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:01.371 11:14:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:01.371 11:14:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.371 11:14:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.371 11:14:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.371 11:14:29 -- paths/export.sh@5 -- # export PATH 00:03:01.371 11:14:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.371 11:14:29 -- nvmf/common.sh@47 -- # : 0 00:03:01.371 11:14:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:01.371 11:14:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:01.371 11:14:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:01.371 11:14:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:01.371 11:14:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:01.371 11:14:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:01.371 11:14:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:01.371 11:14:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:01.371 11:14:29 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:01.371 11:14:29 -- spdk/autotest.sh@32 -- # uname -s 00:03:01.371 11:14:29 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:01.371 11:14:29 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:01.371 11:14:29 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:01.371 11:14:29 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:01.371 11:14:29 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:01.371 11:14:29 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:01.371 11:14:29 -- spdk/autotest.sh@46 -- # type -P 
udevadm 00:03:01.371 11:14:29 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:01.371 11:14:29 -- spdk/autotest.sh@48 -- # udevadm_pid=3272344 00:03:01.371 11:14:29 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:01.371 11:14:29 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:01.371 11:14:29 -- pm/common@17 -- # local monitor 00:03:01.371 11:14:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.371 11:14:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.371 11:14:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.371 11:14:29 -- pm/common@21 -- # date +%s 00:03:01.371 11:14:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.371 11:14:29 -- pm/common@21 -- # date +%s 00:03:01.371 11:14:29 -- pm/common@25 -- # sleep 1 00:03:01.371 11:14:29 -- pm/common@21 -- # date +%s 00:03:01.371 11:14:29 -- pm/common@21 -- # date +%s 00:03:01.371 11:14:29 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721034869 00:03:01.371 11:14:29 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721034869 00:03:01.371 11:14:29 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721034869 00:03:01.371 11:14:29 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721034869 00:03:01.371 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721034869_collect-vmstat.pm.log 00:03:01.371 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721034869_collect-cpu-load.pm.log 00:03:01.371 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721034869_collect-cpu-temp.pm.log 00:03:01.371 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721034869_collect-bmc-pm.bmc.pm.log 00:03:02.311 11:14:30 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:02.311 11:14:30 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:02.311 11:14:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:02.311 11:14:30 -- common/autotest_common.sh@10 -- # set +x 00:03:02.311 11:14:30 -- spdk/autotest.sh@59 -- # create_test_list 00:03:02.311 11:14:30 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:02.311 11:14:30 -- common/autotest_common.sh@10 -- # set +x 00:03:02.311 11:14:30 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:02.311 11:14:30 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:02.311 11:14:30 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:02.311 11:14:30 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:02.311 11:14:30 -- spdk/autotest.sh@63 -- # cd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:02.311 11:14:30 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:02.311 11:14:30 -- common/autotest_common.sh@1455 -- # uname 00:03:02.311 11:14:30 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:02.311 11:14:30 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:02.311 11:14:30 -- common/autotest_common.sh@1475 -- # uname 00:03:02.311 11:14:30 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:02.311 11:14:30 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:02.311 11:14:30 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:02.311 11:14:30 -- spdk/autotest.sh@72 -- # hash lcov 00:03:02.311 11:14:30 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:02.311 11:14:30 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:02.311 --rc lcov_branch_coverage=1 00:03:02.311 --rc lcov_function_coverage=1 00:03:02.311 --rc genhtml_branch_coverage=1 00:03:02.311 --rc genhtml_function_coverage=1 00:03:02.311 --rc genhtml_legend=1 00:03:02.311 --rc geninfo_all_blocks=1 00:03:02.311 ' 00:03:02.311 11:14:30 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:02.311 --rc lcov_branch_coverage=1 00:03:02.311 --rc lcov_function_coverage=1 00:03:02.311 --rc genhtml_branch_coverage=1 00:03:02.311 --rc genhtml_function_coverage=1 00:03:02.311 --rc genhtml_legend=1 00:03:02.311 --rc geninfo_all_blocks=1 00:03:02.311 ' 00:03:02.311 11:14:30 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:02.311 --rc lcov_branch_coverage=1 00:03:02.311 --rc lcov_function_coverage=1 00:03:02.311 --rc genhtml_branch_coverage=1 00:03:02.311 --rc genhtml_function_coverage=1 00:03:02.311 --rc genhtml_legend=1 00:03:02.311 --rc geninfo_all_blocks=1 00:03:02.311 --no-external' 00:03:02.311 11:14:30 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:02.311 --rc lcov_branch_coverage=1 00:03:02.311 --rc lcov_function_coverage=1 00:03:02.311 --rc genhtml_branch_coverage=1 00:03:02.311 --rc genhtml_function_coverage=1 00:03:02.311 --rc genhtml_legend=1 00:03:02.311 --rc geninfo_all_blocks=1 00:03:02.311 --no-external' 00:03:02.311 11:14:30 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:02.571 lcov: LCOV version 1.14 00:03:02.571 11:14:31 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:06.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:06.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:06.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:06.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:06.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:06.777 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:06.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:06.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:06.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:06.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:06.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:06.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:06.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:06.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:07.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:07.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:07.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:07.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:07.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:07.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:07.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:07.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:07.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:07.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:07.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:07.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:07.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:07.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:07.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:07.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:07.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:07.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:07.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:07.038 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:07.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:07.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:07.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:07.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:07.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:07.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:07.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:07.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:07.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:07.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:07.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:07.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:07.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:07.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:07.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:07.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:07.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:07.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:07.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:07.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:07.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:07.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:07.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:07.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:07.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:07.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:07.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 
00:03:07.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:07.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:07.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:07.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:07.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:07.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:07.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:07.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:07.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:07.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:07.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:07.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:07.299 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:07.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:07.299 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:07.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:07.299 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:07.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:07.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:07.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:07.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:07.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:07.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:07.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:07.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:07.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:07.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:07.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no 
functions found 00:03:07.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:07.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:07.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:07.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:07.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:07.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:07.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:07.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:07.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:07.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:07.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:07.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:07.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:07.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:07.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:07.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:07.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:07.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:07.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:07.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:07.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:07.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:07.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:07.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:07.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:07.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:07.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:07.300 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:07.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:07.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:07.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:07.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:07.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:07.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:07.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:07.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:07.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:07.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:07.300 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:07.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:07.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:07.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:07.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:07.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:07.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:07.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:07.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:07.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:07.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:07.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:07.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:07.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:07.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:07.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:07.561 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:07.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:07.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:07.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:07.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:07.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:07.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:07.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:07.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:07.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:07.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:07.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:07.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:07.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:07.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:07.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:07.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:07.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:07.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:07.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:07.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:07.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:07.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:07.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:07.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:07.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:07.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:07.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:07.561 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:07.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:07.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:07.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:07.561 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:25.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:25.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:32.301 11:15:00 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:32.301 11:15:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:32.301 11:15:00 -- common/autotest_common.sh@10 -- # set +x 00:03:32.301 11:15:00 -- spdk/autotest.sh@91 -- # rm -f 00:03:32.301 11:15:00 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:34.848 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:34.848 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:34.848 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:34.848 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:35.108 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:35.108 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:35.108 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:35.108 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:35.108 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:35.108 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:35.108 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:35.108 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:35.108 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:35.108 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:35.108 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:35.367 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:35.367 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:35.628 11:15:04 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:35.628 11:15:04 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:35.628 11:15:04 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:35.628 11:15:04 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:35.628 11:15:04 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:35.628 11:15:04 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:35.628 11:15:04 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:35.628 11:15:04 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:35.628 11:15:04 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:35.628 11:15:04 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:35.628 11:15:04 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:35.628 11:15:04 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:35.628 11:15:04 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:35.628 11:15:04 -- scripts/common.sh@378 -- # 
local block=/dev/nvme0n1 pt 00:03:35.628 11:15:04 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:35.628 No valid GPT data, bailing 00:03:35.628 11:15:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:35.628 11:15:04 -- scripts/common.sh@391 -- # pt= 00:03:35.628 11:15:04 -- scripts/common.sh@392 -- # return 1 00:03:35.628 11:15:04 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:35.628 1+0 records in 00:03:35.628 1+0 records out 00:03:35.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0024132 s, 435 MB/s 00:03:35.628 11:15:04 -- spdk/autotest.sh@118 -- # sync 00:03:35.628 11:15:04 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:35.628 11:15:04 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:35.628 11:15:04 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:43.770 11:15:12 -- spdk/autotest.sh@124 -- # uname -s 00:03:43.770 11:15:12 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:43.770 11:15:12 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:43.770 11:15:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:43.770 11:15:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.770 11:15:12 -- common/autotest_common.sh@10 -- # set +x 00:03:43.770 ************************************ 00:03:43.770 START TEST setup.sh 00:03:43.770 ************************************ 00:03:43.770 11:15:12 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:43.770 * Looking for test storage... 00:03:43.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:43.770 11:15:12 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:43.770 11:15:12 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:43.770 11:15:12 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:43.770 11:15:12 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:43.770 11:15:12 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.770 11:15:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:43.770 ************************************ 00:03:43.770 START TEST acl 00:03:43.770 ************************************ 00:03:43.770 11:15:12 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:43.770 * Looking for test storage... 
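For readers following the trace above: the spdk-gpt.py check, the "No valid GPT data, bailing" message, the blkid -s PTTYPE -o value probe, and the dd if=/dev/zero ... bs=1M count=1 step form a guard-then-wipe pattern, where only a device without recognizable partition data gets its first MiB zeroed before the tests run. Below is a minimal stand-alone sketch of that pattern, not the SPDK helper itself; the device path is a placeholder and the logic is reconstructed only from the commands visible in this log.

# Sketch: wipe the first MiB of a test NVMe device only if no partition
# table is detected (mirrors the blkid/dd sequence traced above).
dev=/dev/nvme0n1                                   # placeholder device
pt=$(blkid -s PTTYPE -o value "$dev" 2>/dev/null || true)
if [ -n "$pt" ]; then
    echo "$dev carries a partition table ($pt); leaving it untouched"
else
    dd if=/dev/zero of="$dev" bs=1M count=1        # clear stale metadata
    sync                                           # flush before the tests start
fi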
00:03:43.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:43.770 11:15:12 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:43.770 11:15:12 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:43.770 11:15:12 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:43.770 11:15:12 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:43.770 11:15:12 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:43.770 11:15:12 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:43.770 11:15:12 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:43.770 11:15:12 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:43.770 11:15:12 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:43.770 11:15:12 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:43.770 11:15:12 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:43.770 11:15:12 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:43.770 11:15:12 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:43.770 11:15:12 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:43.770 11:15:12 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:43.770 11:15:12 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:47.978 11:15:16 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:47.978 11:15:16 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:47.978 11:15:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.978 11:15:16 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:47.978 11:15:16 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.978 11:15:16 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:51.283 Hugepages 00:03:51.283 node hugesize free / total 00:03:51.283 11:15:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:51.283 11:15:19 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:51.283 11:15:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.283 11:15:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:51.283 11:15:19 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:51.283 11:15:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.283 11:15:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:51.283 11:15:19 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:51.283 11:15:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.283 00:03:51.283 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:51.283 11:15:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:51.283 11:15:19 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:51.283 11:15:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.283 11:15:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:51.283 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.283 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.283 11:15:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.283 11:15:19 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:51.284 11:15:19 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:51.284 11:15:19 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:51.284 11:15:19 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.284 11:15:19 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:51.284 ************************************ 00:03:51.284 START TEST denied 00:03:51.284 ************************************ 00:03:51.284 11:15:19 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:51.284 11:15:19 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:51.284 11:15:19 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:51.284 11:15:19 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:51.284 11:15:19 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.284 11:15:19 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:55.535 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:55.535 11:15:23 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:55.535 11:15:23 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:55.535 11:15:23 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:55.535 11:15:23 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:55.535 11:15:23 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:55.535 11:15:23 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:55.535 11:15:23 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:55.535 11:15:23 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:55.535 11:15:23 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:55.535 11:15:23 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:00.821 00:04:00.821 real 0m8.616s 00:04:00.821 user 0m2.920s 00:04:00.821 sys 0m5.006s 00:04:00.821 11:15:28 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.821 11:15:28 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:00.821 ************************************ 00:04:00.821 END TEST denied 00:04:00.821 ************************************ 00:04:00.821 11:15:28 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:00.821 11:15:28 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:00.821 11:15:28 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.821 11:15:28 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.821 11:15:28 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:00.821 ************************************ 00:04:00.821 START TEST allowed 00:04:00.821 ************************************ 00:04:00.821 11:15:28 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:00.821 11:15:28 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:04:00.821 11:15:28 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:00.821 11:15:28 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:04:00.821 11:15:28 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.821 11:15:28 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:06.105 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:06.105 11:15:34 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:06.105 11:15:34 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:06.105 11:15:34 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:06.105 11:15:34 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:06.105 11:15:34 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:10.308 00:04:10.308 real 0m9.546s 00:04:10.308 user 0m2.886s 00:04:10.308 sys 0m4.928s 00:04:10.308 11:15:38 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.308 11:15:38 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:10.308 ************************************ 00:04:10.308 END TEST allowed 00:04:10.308 ************************************ 00:04:10.308 11:15:38 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:10.308 00:04:10.308 real 0m25.950s 00:04:10.308 user 0m8.726s 00:04:10.308 sys 0m14.996s 00:04:10.308 11:15:38 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.308 11:15:38 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:10.308 ************************************ 00:04:10.308 END TEST acl 00:04:10.308 ************************************ 00:04:10.308 11:15:38 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:10.308 11:15:38 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:10.308 11:15:38 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.308 11:15:38 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.308 11:15:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:10.308 ************************************ 00:04:10.308 START TEST hugepages 00:04:10.308 ************************************ 00:04:10.308 11:15:38 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:10.308 * Looking for test storage... 00:04:10.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:10.308 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:10.308 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:10.308 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:10.308 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:10.308 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:10.308 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:10.308 11:15:38 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:10.308 11:15:38 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:10.308 11:15:38 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:10.308 11:15:38 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:10.308 11:15:38 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.308 11:15:38 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.308 11:15:38 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.308 11:15:38 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.308 11:15:38 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.308 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.308 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 102905948 kB' 'MemAvailable: 106390460 kB' 'Buffers: 2704 kB' 'Cached: 14449632 kB' 'SwapCached: 0 kB' 'Active: 11490720 kB' 'Inactive: 3523448 kB' 'Active(anon): 11016536 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565164 kB' 'Mapped: 176228 kB' 'Shmem: 10454704 kB' 'KReclaimable: 527824 kB' 'Slab: 1382760 kB' 'SReclaimable: 527824 kB' 'SUnreclaim: 854936 kB' 'KernelStack: 27216 kB' 'PageTables: 8636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460892 kB' 'Committed_AS: 12598244 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235364 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4281716 kB' 'DirectMap2M: 28952576 kB' 'DirectMap1G: 102760448 kB' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
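The long run of IFS=': ' / read -r var val _ / [[ ... == \H\u\g\e\p\a\g\e\s\i\z\e ]] entries traced here is setup/common.sh walking /proc/meminfo one field at a time until it reaches the Hugepagesize line captured a few entries earlier. A minimal sketch of that lookup pattern follows; the function name is invented for illustration and this is not the actual SPDK helper.

# Sketch: print the value of one /proc/meminfo key using the same
# IFS=': ' read pattern seen in the trace.
get_meminfo_field() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}
get_meminfo_field Hugepagesize   # prints 2048 (kB) on this test rig, per the trace above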
00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.309 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.310 11:15:38 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # 
echo 0 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:10.310 11:15:38 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:10.310 11:15:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.310 11:15:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.310 11:15:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:10.310 ************************************ 00:04:10.310 START TEST default_setup 00:04:10.310 ************************************ 00:04:10.310 11:15:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:10.310 11:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:10.310 11:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:10.310 11:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:10.310 11:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:10.310 11:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:10.310 11:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:10.310 11:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:10.310 11:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:10.310 11:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:10.310 11:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:10.310 11:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:10.310 11:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:10.310 11:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:10.310 11:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:10.310 11:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:10.310 11:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:10.310 11:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:10.310 11:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:10.310 11:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:10.310 11:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:10.310 11:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.310 11:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:13.609 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:13.609 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:13.609 0000:80:01.4 (8086 0b00): ioatdma -> 
vfio-pci 00:04:13.609 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:13.609 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:13.609 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:13.609 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:13.609 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:13.609 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:13.609 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:13.609 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:13.609 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:13.609 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:13.609 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:13.609 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:13.609 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:13.609 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105083740 kB' 'MemAvailable: 108568268 kB' 'Buffers: 2704 kB' 'Cached: 14449748 kB' 'SwapCached: 0 kB' 'Active: 11508892 kB' 'Inactive: 3523448 kB' 'Active(anon): 11034708 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582840 kB' 'Mapped: 176076 kB' 'Shmem: 10454820 kB' 'KReclaimable: 527840 kB' 'Slab: 1380040 kB' 'SReclaimable: 527840 
kB' 'SUnreclaim: 852200 kB' 'KernelStack: 27360 kB' 'PageTables: 8860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12616364 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235396 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4281716 kB' 'DirectMap2M: 28952576 kB' 'DirectMap1G: 102760448 kB' 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# read -r var val _ 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.875 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.876 11:15:42 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.876 11:15:42 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.876 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105084912 kB' 'MemAvailable: 108569440 kB' 'Buffers: 2704 kB' 'Cached: 14449752 kB' 'SwapCached: 0 kB' 'Active: 11508108 kB' 'Inactive: 3523448 kB' 'Active(anon): 11033924 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582116 kB' 'Mapped: 176044 kB' 'Shmem: 10454824 kB' 'KReclaimable: 527840 kB' 'Slab: 1380008 kB' 'SReclaimable: 527840 kB' 'SUnreclaim: 852168 kB' 'KernelStack: 27280 kB' 'PageTables: 8608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12617992 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235380 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4281716 kB' 'DirectMap2M: 28952576 kB' 'DirectMap1G: 102760448 kB' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.877 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.878 11:15:42 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.878 11:15:42 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105089908 kB' 'MemAvailable: 108574436 kB' 'Buffers: 2704 kB' 'Cached: 14449768 kB' 'SwapCached: 0 kB' 'Active: 11509460 kB' 'Inactive: 3523448 kB' 'Active(anon): 11035276 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583768 kB' 'Mapped: 175940 kB' 'Shmem: 10454840 kB' 'KReclaimable: 527840 kB' 'Slab: 1379992 kB' 'SReclaimable: 527840 kB' 'SUnreclaim: 852152 kB' 'KernelStack: 27376 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12633568 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235380 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4281716 kB' 'DirectMap2M: 28952576 kB' 'DirectMap1G: 102760448 kB' 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.878 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.879 
11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.879 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.880 11:15:42 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.880 11:15:42 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:13.880 nr_hugepages=1024 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:13.880 resv_hugepages=0 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:13.880 surplus_hugepages=0 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:13.880 anon_hugepages=0 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.880 
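For reference, the consistency check traced just above (setup/hugepages.sh@107 and @109 in this run) compares the kernel's HugePages_Total against the requested page count plus any surplus and reserved pages. Below is a minimal bash sketch of that arithmetic, assuming only a standard /proc/meminfo layout; the helper name check_hugepage_accounting is illustrative and is not part of SPDK's scripts.

check_hugepage_accounting() {
    local want=$1                                  # requested pages, e.g. 1024
    local total surp resv
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
    # The pool is consistent when the kernel's total equals requested + surplus + reserved.
    (( total == want + surp + resv ))
}
check_hugepage_accounting 1024 && echo 'hugepage pool consistent'   # 1024 == 1024 + 0 + 0 on this runner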
11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105092404 kB' 'MemAvailable: 108576932 kB' 'Buffers: 2704 kB' 'Cached: 14449792 kB' 'SwapCached: 0 kB' 'Active: 11507656 kB' 'Inactive: 3523448 kB' 'Active(anon): 11033472 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581944 kB' 'Mapped: 175940 kB' 'Shmem: 10454864 kB' 'KReclaimable: 527840 kB' 'Slab: 1379972 kB' 'SReclaimable: 527840 kB' 'SUnreclaim: 852132 kB' 'KernelStack: 27216 kB' 'PageTables: 8292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12616196 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235364 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4281716 kB' 'DirectMap2M: 28952576 kB' 'DirectMap1G: 102760448 kB' 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.880 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 
11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.881 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.881 11:15:42 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53218300 kB' 'MemUsed: 12440708 kB' 'SwapCached: 0 kB' 'Active: 4797060 kB' 'Inactive: 3299996 kB' 'Active(anon): 4644512 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3299996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7832844 kB' 'Mapped: 80008 kB' 'AnonPages: 267448 kB' 'Shmem: 4380300 kB' 'KernelStack: 15448 kB' 'PageTables: 4556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 394368 kB' 'Slab: 885084 kB' 
'SReclaimable: 394368 kB' 'SUnreclaim: 490716 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.882 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.883 11:15:42 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
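The surrounding xtrace output all comes from one loop in setup/common.sh's get_meminfo: each meminfo line is split on IFS=': ' into a key and a value, every key that is not the one being queried hits "continue", and when the queried key (here HugePages_Surp) is reached its value is echoed and the function returns. A minimal standalone sketch of that lookup follows; the helper name is hypothetical and it reads /proc/meminfo directly instead of the mapfile'd copy the real script iterates over.

#!/usr/bin/env bash
# Minimal sketch of the key lookup the trace above repeats (hypothetical name).
# Each line of /proc/meminfo is split on ': ' into a key and a value; the value
# of the requested key is printed and the function returns success.
meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

# Example: surplus huge pages, which the trace above resolves to 0.
meminfo_value HugePages_Surp

On a system matching the dumps in this log, meminfo_value HugePages_Total would print 1024, the value the default_setup test goes on to compare against its expectation.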
00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:13.883 node0=1024 expecting 1024 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:13.883 00:04:13.883 real 0m4.009s 00:04:13.883 user 0m1.504s 00:04:13.883 sys 0m2.526s 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.883 11:15:42 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:13.883 ************************************ 00:04:13.883 END TEST default_setup 00:04:13.883 ************************************ 00:04:13.883 11:15:42 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:13.883 11:15:42 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:13.883 11:15:42 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.883 11:15:42 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.883 11:15:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:14.144 ************************************ 00:04:14.144 START TEST per_node_1G_alloc 00:04:14.144 ************************************ 00:04:14.144 11:15:42 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:14.144 11:15:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:14.144 11:15:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:14.144 11:15:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:14.144 11:15:42 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:14.144 11:15:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:14.144 11:15:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:14.144 11:15:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:14.144 11:15:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:14.144 11:15:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:14.144 11:15:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:14.144 11:15:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:14.144 11:15:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:14.144 11:15:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:14.144 11:15:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:14.144 11:15:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:14.144 11:15:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:14.145 11:15:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:14.145 11:15:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:14.145 11:15:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:14.145 11:15:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:14.145 11:15:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:14.145 11:15:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:14.145 11:15:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:14.145 11:15:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:14.145 11:15:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:14.145 11:15:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.145 11:15:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:17.447 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:17.447 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:17.447 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:17.447 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:17.447 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:17.447 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:17.447 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:17.447 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:17.447 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:17.447 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:17.447 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:17.447 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:17.447 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:17.447 
0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:17.447 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:17.447 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:17.447 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105106276 kB' 'MemAvailable: 108590796 kB' 'Buffers: 2704 kB' 'Cached: 14449908 kB' 'SwapCached: 0 kB' 'Active: 11507636 kB' 'Inactive: 3523448 kB' 'Active(anon): 11033452 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581784 kB' 'Mapped: 175348 kB' 'Shmem: 10454980 kB' 'KReclaimable: 527832 kB' 'Slab: 1378956 kB' 'SReclaimable: 527832 kB' 'SUnreclaim: 851124 kB' 'KernelStack: 27536 kB' 'PageTables: 8880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12606736 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235588 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4281716 kB' 'DirectMap2M: 28952576 kB' 'DirectMap1G: 102760448 kB' 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.447 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.448 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 
0 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105110036 kB' 'MemAvailable: 108594556 kB' 'Buffers: 2704 kB' 'Cached: 14449928 kB' 'SwapCached: 0 kB' 'Active: 11506916 kB' 'Inactive: 3523448 kB' 'Active(anon): 11032732 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580952 kB' 'Mapped: 174960 kB' 'Shmem: 10455000 kB' 'KReclaimable: 527832 kB' 'Slab: 1378956 kB' 'SReclaimable: 527832 kB' 'SUnreclaim: 851124 kB' 'KernelStack: 27552 kB' 'PageTables: 9236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12606760 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235604 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4281716 kB' 'DirectMap2M: 28952576 kB' 'DirectMap1G: 102760448 kB' 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.449 11:15:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.449 11:15:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.449 
11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.449 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.450 11:15:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.450 11:15:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.450 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.716 11:15:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.716 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105108128 kB' 'MemAvailable: 108592648 kB' 'Buffers: 2704 kB' 'Cached: 14449944 kB' 'SwapCached: 0 kB' 'Active: 11506596 kB' 'Inactive: 3523448 kB' 'Active(anon): 11032412 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580600 kB' 'Mapped: 174968 kB' 'Shmem: 10455016 kB' 'KReclaimable: 527832 kB' 'Slab: 1378956 kB' 'SReclaimable: 527832 kB' 'SUnreclaim: 851124 kB' 'KernelStack: 27584 kB' 'PageTables: 8816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12607000 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235668 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4281716 kB' 'DirectMap2M: 28952576 kB' 'DirectMap1G: 102760448 kB' 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.717 
11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.717 11:15:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.717 11:15:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.717 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.718 11:15:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.718 11:15:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.718 11:15:46 
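The loop traced above is setup/common.sh's get_meminfo walking /proc/meminfo key by key: with IFS=': ' it reads each "var val" pair, emits continue for every key that does not match the requested one, and finally echoes the matching value (0 here for HugePages_Rsvd) before returning. The following is only a minimal sketch of that pattern under a hypothetical name, get_meminfo_sketch, not the verbatim SPDK helper:

    # Sketch of the get_meminfo pattern seen in the trace (illustrative only).
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node index, read the per-node file instead; its lines carry a
        # "Node N " prefix that must be stripped before matching keys.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # every non-matching key is skipped
            echo "$val"                        # value in kB, or a bare page count
            return 0
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        echo 0                                 # key absent: report 0
    }

Called as get_meminfo_sketch HugePages_Rsvd it would print 0 on this machine, matching the resv=0 the test records next.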
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:17.718 nr_hugepages=1024 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:17.718 resv_hugepages=0 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:17.718 surplus_hugepages=0 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:17.718 anon_hugepages=0 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.718 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105109352 kB' 'MemAvailable: 108593872 kB' 'Buffers: 2704 kB' 'Cached: 14449968 kB' 'SwapCached: 0 kB' 'Active: 11506468 kB' 'Inactive: 3523448 kB' 'Active(anon): 11032284 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580452 kB' 'Mapped: 174960 kB' 'Shmem: 10455040 kB' 'KReclaimable: 527832 kB' 'Slab: 1378924 kB' 'SReclaimable: 527832 kB' 'SUnreclaim: 851092 kB' 'KernelStack: 27280 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12607172 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235572 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4281716 kB' 'DirectMap2M: 28952576 kB' 'DirectMap1G: 
102760448 kB' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.719 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:17.720 11:15:46 
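At this point the test holds all three counters it needs, and the trace shows the consistency check (( 1024 == nr_hugepages + surp + resv )) passing: every one of the 1024 requested 2048 kB pages is present, with no surplus or reserved pages outstanding. The same arithmetic can be reproduced straight from /proc/meminfo; this is a sketch rather than the test's own code, with variable names that merely mirror the ones in the log:

    # Recompute the hugepage accounting the log verifies above.
    nr_hugepages=1024   # pages requested by the test, as echoed in the log
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    # The check passes only when every requested page is accounted for.
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting OK ($total pages)"

get_nodes, which starts on the next trace line, then repeats the same read per NUMA node through /sys/devices/system/node/node<N>/meminfo, expecting the 1024 pages to be split evenly as 512 per node on this 2-node system.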
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.720 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54252328 kB' 'MemUsed: 11406680 kB' 'SwapCached: 0 kB' 'Active: 4795256 kB' 'Inactive: 3299996 kB' 'Active(anon): 4642708 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3299996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7832968 kB' 'Mapped: 79544 kB' 'AnonPages: 265464 kB' 'Shmem: 4380424 kB' 'KernelStack: 15432 kB' 'PageTables: 4488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 394360 kB' 'Slab: 884532 kB' 'SReclaimable: 394360 kB' 'SUnreclaim: 490172 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 
11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.721 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.722 11:15:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 50855224 kB' 'MemUsed: 9824648 kB' 'SwapCached: 0 kB' 'Active: 6711336 kB' 'Inactive: 223452 kB' 'Active(anon): 6389700 kB' 'Inactive(anon): 0 kB' 'Active(file): 321636 kB' 'Inactive(file): 223452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6619728 kB' 'Mapped: 95416 kB' 'AnonPages: 315116 kB' 'Shmem: 6074640 kB' 
'KernelStack: 11960 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133472 kB' 'Slab: 494392 kB' 'SReclaimable: 133472 kB' 'SUnreclaim: 360920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.722 11:15:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.722 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.723 11:15:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.723 11:15:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:17.723 node0=512 expecting 512 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:17.723 node1=512 expecting 512 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:17.723 00:04:17.723 real 0m3.685s 00:04:17.723 user 0m1.440s 00:04:17.723 sys 0m2.294s 00:04:17.723 11:15:46 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.723 11:15:46 
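The two long scans above are the same lookup run once per NUMA node: setup/common.sh's get_meminfo walks either /proc/meminfo or the per-node /sys/devices/system/node/nodeN/meminfo, strips the leading "Node <N> " prefix, and echoes the value of the requested field, here HugePages_Surp, which is 0 on both nodes. A minimal sketch of that pattern, assuming the standard meminfo layouts; the function name and locals below are illustrative, not the helper's own:

shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node N "

get_meminfo_sketch() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    # A per-node request only switches files if that node's meminfo exists.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem <"$mem_f"
    # Per-node meminfo prefixes every line with "Node <N> "; drop it so the
    # key comparison is identical for both sources.
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done
    echo 0
}

# e.g. surp=$(get_meminfo_sketch HugePages_Surp 1)   # -> 0 in this run

With both surplus counts at 0, the per-node totals stay at the 512 pages reserved earlier, which is why the check above printed node0=512 expecting 512 and node1=512 expecting 512 before the test reported its timing.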
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:17.723 ************************************ 00:04:17.723 END TEST per_node_1G_alloc 00:04:17.723 ************************************ 00:04:17.723 11:15:46 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:17.723 11:15:46 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:17.723 11:15:46 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.723 11:15:46 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.723 11:15:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:17.723 ************************************ 00:04:17.723 START TEST even_2G_alloc 00:04:17.723 ************************************ 00:04:17.723 11:15:46 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:17.723 11:15:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:17.723 11:15:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:17.723 11:15:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:17.723 11:15:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:17.723 11:15:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:17.723 11:15:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:17.723 11:15:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:17.723 11:15:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:17.723 11:15:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:17.723 11:15:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:17.723 11:15:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:17.723 11:15:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:17.723 11:15:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:17.723 11:15:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:17.723 11:15:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.723 11:15:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:17.723 11:15:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:17.723 11:15:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:17.723 11:15:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.723 11:15:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:17.723 11:15:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:17.723 11:15:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:17.723 11:15:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.723 11:15:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:17.723 11:15:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:17.723 11:15:46 
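The next test, even_2G_alloc, first converts its 2 GiB request into hugepage counts before touching setup.sh: 2097152 kB at the default 2048 kB hugepage size is 1024 pages, split evenly as 512 per node across the two NUMA nodes, then exported as NRHUGE=1024 with HUGE_EVEN_ALLOC=yes so the setup script can spread the reservation evenly instead of filling node0 first. The same arithmetic as a sketch (variable names are illustrative, not the script's own):

size_kb=2097152              # requested allocation: 2 GiB expressed in kB
hugepage_kb=2048             # Hugepagesize reported by /proc/meminfo in this log
no_nodes=2                   # NUMA nodes on this machine

nr_hugepages=$(( size_kb / hugepage_kb ))   # -> 1024
per_node=$(( nr_hugepages / no_nodes ))     # -> 512
echo "NRHUGE=$nr_hugepages, $per_node pages on each of $no_nodes nodes"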
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:17.723 11:15:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.723 11:15:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:21.096 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:21.096 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:21.096 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:21.096 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:21.096 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:21.096 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:21.096 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:21.096 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:21.096 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:21.096 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:21.096 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:21.096 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:21.096 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:21.096 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:21.096 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:21.096 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:21.096 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.096 11:15:49 
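verify_nr_hugepages opens with a guard on transparent hugepages: the bracketed entry in "always [madvise] never" is the kernel's selected THP mode (conventionally read from /sys/kernel/mm/transparent_hugepage/enabled), and because it is not "[never]" the script goes on to sample AnonHugePages so anonymous THP usage can be accounted for; the meminfo dump that follows shows it is 0 kB here. A sketch of that guard, assuming the conventional sysfs path; names are illustrative:

thp_mode=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp_mode != *"[never]"* ]]; then
    # THP is not globally disabled, so record system-wide anonymous THP
    # usage (kB); 0 in this run, so it does not affect the free-page math.
    anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
else
    anon_kb=0
fi
echo "anon=${anon_kb}"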
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105124244 kB' 'MemAvailable: 108608756 kB' 'Buffers: 2704 kB' 'Cached: 14450092 kB' 'SwapCached: 0 kB' 'Active: 11507404 kB' 'Inactive: 3523448 kB' 'Active(anon): 11033220 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581472 kB' 'Mapped: 175004 kB' 'Shmem: 10455164 kB' 'KReclaimable: 527824 kB' 'Slab: 1379496 kB' 'SReclaimable: 527824 kB' 'SUnreclaim: 851672 kB' 'KernelStack: 27360 kB' 'PageTables: 8280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12607832 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235748 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4281716 kB' 'DirectMap2M: 28952576 kB' 'DirectMap1G: 102760448 kB' 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.096 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.097 11:15:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.097 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105125400 kB' 'MemAvailable: 108609912 kB' 'Buffers: 2704 kB' 'Cached: 14450092 kB' 'SwapCached: 0 kB' 'Active: 11507168 kB' 'Inactive: 3523448 kB' 'Active(anon): 11032984 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581120 kB' 'Mapped: 174996 kB' 'Shmem: 10455164 kB' 'KReclaimable: 527824 kB' 'Slab: 1379108 kB' 'SReclaimable: 527824 kB' 'SUnreclaim: 851284 kB' 'KernelStack: 27248 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12626028 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235780 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4281716 kB' 'DirectMap2M: 28952576 kB' 'DirectMap1G: 102760448 kB' 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.098 11:15:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.098 11:15:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.098 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.099 11:15:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.099 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105124628 kB' 'MemAvailable: 108609140 kB' 'Buffers: 2704 kB' 'Cached: 14450112 kB' 'SwapCached: 0 kB' 'Active: 11507272 kB' 'Inactive: 3523448 kB' 'Active(anon): 11033088 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581172 kB' 'Mapped: 174996 kB' 'Shmem: 10455184 kB' 'KReclaimable: 527824 kB' 'Slab: 1379180 kB' 'SReclaimable: 527824 kB' 'SUnreclaim: 851356 kB' 'KernelStack: 27440 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12607500 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235716 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4281716 kB' 'DirectMap2M: 28952576 kB' 'DirectMap1G: 102760448 kB' 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
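The trace above and below is setup/common.sh's get_meminfo walking /proc/meminfo one key at a time: every field that is not the requested one (first HugePages_Surp, now HugePages_Rsvd) falls through to the `continue` branch, and the matching key's value is returned with `echo`. A minimal, self-contained sketch of that lookup, reconstructed from the trace rather than copied from the setup/common.sh it references; the function name and minor details are assumptions:

    # Sketch of the key lookup being traced here (illustrative, not the real script).
    # Key name in, value out, optional NUMA node index.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val rest
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node [0-9]* }        # per-node meminfo prefixes every row with "Node N"
            IFS=': ' read -r var val rest <<< "$line"
            if [[ $var == "$get" ]]; then    # quoted RHS: literal comparison, not a glob
                echo "${val:-0}"
                return 0
            fi
        done < "$mem_f"
        echo 0                               # key absent: report 0
    }

On this host, get_meminfo_sketch HugePages_Surp and get_meminfo_sketch HugePages_Rsvd would both print 0, which is what hugepages.sh records as surp and resv in the steps traced here.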
00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.100 
11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.100 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
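The backslash-heavy patterns in this scan (\H\u\g\e\P\a\g\e\s\_\R\s\v\d and friends) are not corruption: get_meminfo compares each key against a quoted variable, and bash's xtrace renders a quoted right-hand side of [[ ... == ... ]] with every character escaped so the comparison is unambiguously literal rather than a glob. The test's customized PS4 additionally prefixes each traced command with the timestamp, test name and source location (setup/common.sh@32) instead of the default '+ '. A short reproduction in a bare shell (illustrative, not taken from the log):

    get=HugePages_Rsvd var=MemTotal
    set -x
    [[ $var == "$get" ]]
    # xtrace prints: + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]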
00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.101 11:15:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.101 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:21.102 nr_hugepages=1024 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:21.102 resv_hugepages=0 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:21.102 surplus_hugepages=0 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:21.102 anon_hugepages=0 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.102 11:15:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105123756 kB' 'MemAvailable: 108608268 kB' 'Buffers: 2704 kB' 'Cached: 14450136 kB' 'SwapCached: 0 kB' 'Active: 11507600 kB' 'Inactive: 3523448 kB' 'Active(anon): 11033416 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580996 kB' 'Mapped: 174996 kB' 'Shmem: 10455208 kB' 'KReclaimable: 527824 kB' 'Slab: 1379176 kB' 'SReclaimable: 527824 kB' 'SUnreclaim: 851352 kB' 'KernelStack: 27424 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12605920 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235700 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4281716 kB' 'DirectMap2M: 28952576 kB' 'DirectMap1G: 102760448 kB' 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
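At this point the test has established anon=0, surp=0 and resv=0, echoed the expected totals (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0), and is re-reading HugePages_Total to confirm that 1024 == nr_hugepages + surp + resv before splitting the pool evenly across the two NUMA nodes (512 pages each, as the nodes_sys assignments further on show). A condensed, stand-alone version of that accounting check; the variable names and the per-node comparison are illustrative, only the /proc and /sys paths are the standard kernel interfaces:

    nr_hugepages=1024
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    (( total == nr_hugepages + surp + rsvd )) || echo "hugepage accounting mismatch: $total"

    # even_2G_alloc expects the pool split evenly across NUMA nodes (512 + 512 here).
    nodes=(/sys/devices/system/node/node[0-9]*)   # assumes a NUMA machine, as in this run
    per_node=$(( nr_hugepages / ${#nodes[@]} ))
    for n in "${nodes[@]}"; do
        have=$(< "$n/hugepages/hugepages-2048kB/nr_hugepages")
        (( have == per_node )) || echo "${n##*/}: $have pages, expected $per_node"
    done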
00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 11:15:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 11:15:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:21.103 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54248760 kB' 'MemUsed: 11410248 kB' 'SwapCached: 0 kB' 'Active: 4795024 kB' 'Inactive: 3299996 kB' 'Active(anon): 4642476 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3299996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7833072 kB' 'Mapped: 79580 kB' 'AnonPages: 265140 kB' 'Shmem: 4380528 kB' 'KernelStack: 15368 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 394360 kB' 'Slab: 884884 kB' 'SReclaimable: 394360 kB' 'SUnreclaim: 490524 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
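The long run of "continue" steps around this point is setup/common.sh's get_meminfo helper scanning the node 0 snapshot it just printed (note the HugePages_Total: 512, HugePages_Free: 512, HugePages_Surp: 0 fields): it reads /sys/devices/system/node/node0/meminfo, strips the leading "Node 0" prefix, and walks the key/value pairs with IFS=': ' until it hits the requested field, at which point it echoes the value, 0 for HugePages_Surp on this node. A minimal stand-alone sketch of that lookup, assuming a sed-based prefix strip in place of the script's mapfile/extglob handling (the function name get_meminfo_sketch is made up for illustration, not part of SPDK):

get_meminfo_sketch() {
    local key=$1 node=${2:-} var val
    local mem_f=/proc/meminfo
    # Prefer the per-node view when a node index is given and exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix every line with "Node <n> "; strip that so the
    # same key comparison works for both sources, then scan key/value pairs.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            printf '%s\n' "$val"
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1
}

On the snapshot above, get_meminfo_sketch HugePages_Surp 0 would print 0, which is the value the trace echoes at common.sh@33 a little further down.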
00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 11:15:49 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 50873484 kB' 'MemUsed: 9806388 kB' 'SwapCached: 0 kB' 'Active: 6711704 kB' 'Inactive: 223452 kB' 'Active(anon): 6390068 kB' 'Inactive(anon): 0 kB' 'Active(file): 321636 kB' 'Inactive(file): 223452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6619768 kB' 'Mapped: 95416 kB' 'AnonPages: 315468 kB' 'Shmem: 6074680 kB' 'KernelStack: 11800 kB' 'PageTables: 3336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133464 kB' 'Slab: 494292 kB' 'SReclaimable: 133464 kB' 'SUnreclaim: 360828 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.106 
11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 
11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.107 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.107 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.107 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:21.107 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:21.107 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:21.107 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:21.107 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:21.107 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:21.107 node0=512 expecting 512 00:04:21.107 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:21.107 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:21.107 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:21.107 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:21.107 node1=512 expecting 512 00:04:21.107 11:15:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:21.107 00:04:21.107 real 0m3.311s 00:04:21.107 user 0m1.210s 00:04:21.107 sys 0m2.073s 00:04:21.107 11:15:49 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.107 11:15:49 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:21.107 ************************************ 00:04:21.107 END TEST even_2G_alloc 00:04:21.107 
************************************ 00:04:21.107 11:15:49 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:21.107 11:15:49 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:21.107 11:15:49 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.107 11:15:49 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.107 11:15:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:21.107 ************************************ 00:04:21.107 START TEST odd_alloc 00:04:21.107 ************************************ 00:04:21.107 11:15:49 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:21.107 11:15:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:21.107 11:15:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:21.107 11:15:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:21.107 11:15:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:21.107 11:15:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:21.107 11:15:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:21.107 11:15:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:21.107 11:15:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:21.107 11:15:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:21.107 11:15:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:21.107 11:15:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:21.107 11:15:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:21.107 11:15:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:21.107 11:15:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:21.107 11:15:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:21.107 11:15:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:21.107 11:15:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:21.107 11:15:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:21.107 11:15:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:21.107 11:15:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:21.107 11:15:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:21.107 11:15:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:21.107 11:15:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:21.107 11:15:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:21.107 11:15:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:21.107 11:15:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:21.107 11:15:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.107 11:15:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 
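Before setup.sh's output takes over below, the odd_alloc preamble above is worth decoding: HUGEMEM=2049 (MiB) becomes get_test_nr_hugepages 2098176 (kB), which against the default 2048 kB hugepage size rounds to the deliberately odd nr_hugepages=1025, and get_test_nr_hugepages_per_node spreads that across the two NUMA nodes as 512 pages for node 1 and 513 for node 0 before HUGE_EVEN_ALLOC=yes hands control to scripts/setup.sh. A rough reconstruction of that arithmetic follows; the variable names are invented for the sketch and the ceiling rounding is inferred from the traced result rather than read out of hugepages.sh:

# 2049 MiB requested as 2098176 kB of 2048 kB hugepages.
size_kb=2098176
hugepagesize_kb=2048
# Ceiling division reproduces the 1025 shown in the trace (2098176/2048 = 1024.5).
nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
# Per-node split as traced: divide what is left over the nodes not yet
# assigned, starting from the highest-numbered node, so node 1 gets
# 1025/2 = 512 and node 0 gets the remaining 513.
no_nodes=2
per_node=()
left=$nr_hugepages
for (( node = no_nodes - 1; node >= 0; node-- )); do
    per_node[node]=$(( left / (node + 1) ))
    (( left -= per_node[node] ))
done
printf 'nr_hugepages=%d node0=%d node1=%d\n' \
    "$nr_hugepages" "${per_node[0]}" "${per_node[1]}"

Running the sketch prints nr_hugepages=1025 node0=513 node1=512, the same split the verify_nr_hugepages pass that starts below goes on to check field by field.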
00:04:24.411 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:24.411 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:24.411 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:24.411 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:24.411 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:24.411 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:24.411 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:24.411 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:24.411 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:24.411 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:24.411 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:24.411 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:24.411 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:24.411 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:24.411 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:24.411 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:24.411 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105115500 kB' 'MemAvailable: 108599972 kB' 'Buffers: 2704 kB' 'Cached: 14450272 kB' 'SwapCached: 0 kB' 'Active: 11507512 kB' 'Inactive: 3523448 kB' 'Active(anon): 11033328 kB' 'Inactive(anon): 0 kB' 
'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580312 kB' 'Mapped: 175100 kB' 'Shmem: 10455344 kB' 'KReclaimable: 527784 kB' 'Slab: 1379368 kB' 'SReclaimable: 527784 kB' 'SUnreclaim: 851584 kB' 'KernelStack: 27600 kB' 'PageTables: 9192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12608548 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235796 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4281716 kB' 'DirectMap2M: 28952576 kB' 'DirectMap1G: 102760448 kB' 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.672 11:15:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.672 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 
11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 
11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.673 11:15:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105115412 kB' 'MemAvailable: 108599884 kB' 'Buffers: 2704 kB' 'Cached: 14450292 kB' 'SwapCached: 0 kB' 'Active: 11507532 kB' 'Inactive: 3523448 kB' 'Active(anon): 11033348 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580716 kB' 'Mapped: 175084 kB' 'Shmem: 10455364 kB' 'KReclaimable: 527784 kB' 'Slab: 1379360 kB' 'SReclaimable: 527784 kB' 'SUnreclaim: 851576 kB' 'KernelStack: 27568 kB' 'PageTables: 8944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12626536 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235716 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4281716 kB' 'DirectMap2M: 28952576 kB' 'DirectMap1G: 102760448 kB' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 
11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.673 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.674 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.674 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.674 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.674 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.674 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.674 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.674 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.674 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.674 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.674 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.674 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.674 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.674 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.674 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.674 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:24.674 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.674 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.674 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.674 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.674 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.674 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.674 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.674 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.936 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.937 11:15:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105118152 kB' 'MemAvailable: 108602624 kB' 'Buffers: 2704 kB' 'Cached: 14450304 kB' 'SwapCached: 0 kB' 'Active: 11505936 kB' 'Inactive: 3523448 kB' 'Active(anon): 11031752 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579588 kB' 'Mapped: 175008 kB' 'Shmem: 10455376 kB' 'KReclaimable: 527784 kB' 'Slab: 1379384 kB' 'SReclaimable: 527784 kB' 'SUnreclaim: 851600 kB' 'KernelStack: 27360 kB' 'PageTables: 7980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12606236 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235620 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4281716 kB' 'DirectMap2M: 28952576 kB' 'DirectMap1G: 102760448 kB' 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.937 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.938 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:24.939 nr_hugepages=1025 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:24.939 resv_hugepages=0 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:24.939 surplus_hugepages=0 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:24.939 anon_hugepages=0 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == 
nr_hugepages + surp + resv )) 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105118892 kB' 'MemAvailable: 108603364 kB' 'Buffers: 2704 kB' 'Cached: 14450328 kB' 'SwapCached: 0 kB' 'Active: 11506068 kB' 'Inactive: 3523448 kB' 'Active(anon): 11031884 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579780 kB' 'Mapped: 174972 kB' 'Shmem: 10455400 kB' 'KReclaimable: 527784 kB' 'Slab: 1379296 kB' 'SReclaimable: 527784 kB' 'SUnreclaim: 851512 kB' 'KernelStack: 27344 kB' 'PageTables: 8544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12606256 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235604 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4281716 kB' 'DirectMap2M: 28952576 kB' 'DirectMap1G: 102760448 kB' 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.939 
11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.939 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
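The long run of "[[ <key> == HugePages_Total ]] ... continue" entries above and below is the xtrace of setup/common.sh's get_meminfo walking /proc/meminfo one key at a time until it reaches HugePages_Total. A condensed sketch of that lookup pattern follows; the function name is illustrative and the body is simplified (the traced helper uses mapfile plus an extglob prefix strip so the same parser also handles per-node files), not the exact SPDK implementation.

    # Condensed sketch of the key lookup traced here: split each meminfo line on ': '
    # and print the value of the requested key, e.g. "HugePages_Total: 1025" -> 1025.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done </proc/meminfo
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Total    -> 1025 on the machine traced here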
00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.940 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.941 11:15:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54251136 kB' 'MemUsed: 11407872 kB' 'SwapCached: 0 kB' 'Active: 4793384 kB' 'Inactive: 3299996 kB' 'Active(anon): 4640836 
kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3299996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7833220 kB' 'Mapped: 79592 kB' 'AnonPages: 263352 kB' 'Shmem: 4380676 kB' 'KernelStack: 15416 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 394352 kB' 'Slab: 884896 kB' 'SReclaimable: 394352 kB' 'SUnreclaim: 490544 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
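Here the same lookup runs with a node argument (node=0): the trace above shows get_meminfo switching its input from /proc/meminfo to /sys/devices/system/node/node0/meminfo once that file exists, and stripping the leading "Node 0 " prefix before parsing. A hedged sketch of that source selection (the paths are the kernel's standard sysfs layout; the helper name and exact body are illustrative, not the SPDK function itself):

    # Print the value of meminfo key $1, optionally scoped to NUMA node $2.
    node_meminfo_value() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo   # per-node file on NUMA systems
        fi
        while IFS= read -r line; do
            line=${line#"Node $node "}          # per-node lines carry a "Node <N> " prefix
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done <"$mem_f"
        return 1
    }
    # e.g. node_meminfo_value HugePages_Surp 0    -> 0 in this run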
00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.941 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.942 11:15:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 50869812 kB' 'MemUsed: 9810060 kB' 'SwapCached: 0 kB' 'Active: 6712872 kB' 'Inactive: 223452 kB' 'Active(anon): 6391236 kB' 'Inactive(anon): 0 kB' 'Active(file): 321636 kB' 'Inactive(file): 223452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6619836 kB' 'Mapped: 95380 kB' 'AnonPages: 316712 kB' 'Shmem: 6074748 kB' 'KernelStack: 11928 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133432 kB' 'Slab: 494400 kB' 'SReclaimable: 133432 kB' 'SUnreclaim: 360968 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:24.942 
11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
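The per-node HugePages_Surp lookups on node 0 (above) and node 1 (in progress here) feed the accumulation loop traced at hugepages.sh@115-117, which adds the reserved and surplus pages to each node's expected count. A minimal, self-contained sketch of that accounting with values mirroring this run (resv and both surplus counts are 0, so the expected 513/512 split is unchanged; which node carries the odd extra page is the kernel's choice):

    # Illustrative per-node accounting mirroring hugepages.sh@115-117 for this run.
    nodes_test=([0]=513 [1]=512)   # expected per-node split of the 1025 pages (see the "expecting" lines below)
    node_surp=([0]=0 [1]=0)        # HugePages_Surp reported for node 0 and node 1
    resv=0                         # HugePages_Rsvd: 0 in the global dump above
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))              # @116
        (( nodes_test[node] += node_surp[node] ))   # @117
    done
    echo "${nodes_test[0]} ${nodes_test[1]}"        # 513 512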
00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.942 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
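With the node-1 scan finished, hugepages.sh@126-130 (traced immediately below) folds both the measured per-node counts (nodes_sys) and the expected ones (nodes_test) into arrays indexed by the counts themselves, so the final comparison ignores which node actually received the odd extra page. A sketch of that order-insensitive check, with illustrative values matching the "node0=512 expecting 513" / "node1=513 expecting 512" lines that follow:

    # Sketch of the order-insensitive pass/fail check (hugepages.sh@126-130).
    nodes_sys=([0]=512 [1]=513)    # counts read back per node in this run
    nodes_test=([0]=513 [1]=512)   # counts the test expected per node
    sorted_s=() sorted_t=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1   # using the count as the index discards node order
        sorted_s[nodes_sys[node]]=1
    done
    [[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]] && echo OK   # both sides expand to "512 513"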
00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:24.943 node0=512 expecting 513 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:24.943 node1=513 expecting 512 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:24.943 00:04:24.943 real 0m3.749s 00:04:24.943 user 0m1.457s 00:04:24.943 sys 0m2.313s 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.943 11:15:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:24.943 ************************************ 00:04:24.943 END TEST odd_alloc 00:04:24.943 ************************************ 00:04:24.943 11:15:53 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:24.943 11:15:53 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:24.943 11:15:53 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.943 11:15:53 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.943 11:15:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:24.943 ************************************ 00:04:24.943 START TEST custom_alloc 00:04:24.944 ************************************ 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 
1 )) 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.944 11:15:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:28.251 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:28.251 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:28.251 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:28.251 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 
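The custom_alloc prologue traced above picks per-node page targets (nodes_hp[0]=512 and nodes_hp[1]=1024, per the two get_test_nr_hugepages calls) and joins them into the HUGENODE string that scripts/setup.sh consumes, 1536 pages in total. A condensed sketch of that assembly using the values from this run; IFS=, mirrors the "local IFS=," at hugepages.sh@167 and is what turns the array into a comma-separated string:

    # Condensed sketch of the HUGENODE assembly traced above (values from this run).
    IFS=,                          # hugepages.sh@167: joins "${HUGENODE[*]}" with commas
    nodes_hp=([0]=512 [1]=1024)    # per-node targets chosen by get_test_nr_hugepages
    HUGENODE=() _nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( _nr_hugepages += nodes_hp[node] ))
    done
    echo "${HUGENODE[*]}"    # nodes_hp[0]=512,nodes_hp[1]=1024
    echo "$_nr_hugepages"    # 1536, the total verified below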
00:04:28.251 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:28.251 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:28.251 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:28.251 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:28.251 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:28.251 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:28.251 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:28.251 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:28.252 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:28.252 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:28.252 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:28.252 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:28.252 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104110496 kB' 'MemAvailable: 107594968 kB' 'Buffers: 2704 kB' 'Cached: 14450444 kB' 'SwapCached: 0 kB' 'Active: 11507460 kB' 'Inactive: 3523448 kB' 'Active(anon): 11033276 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580464 kB' 'Mapped: 175132 kB' 'Shmem: 10455516 kB' 'KReclaimable: 527784 kB' 'Slab: 1379216 kB' 'SReclaimable: 527784 kB' 'SUnreclaim: 851432 kB' 'KernelStack: 27456 kB' 'PageTables: 8824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12609800 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235716 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4281716 kB' 'DirectMap2M: 28952576 kB' 'DirectMap1G: 102760448 kB' 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.252 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
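[editor's note] The long run of "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" lines here is get_meminfo (setup/common.sh) walking /proc/meminfo one "key: value" pair at a time until the requested field matches, then echoing its value (0 for AnonHugePages on this host). A simplified standalone sketch of that lookup follows; the real helper uses mapfile and strips "Node N " prefixes up front, so treat this as an approximation of the technique, not the script's exact code.

# Sketch of the /proc/meminfo lookup visible in the trace: split each line on
# ': ' and print the value of the first field whose name matches.
get_meminfo() {
  local get=$1 node=${2:-}
  local mem_f=/proc/meminfo line var val _
  # per-node lookup when a node index is given and its meminfo file exists
  [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
    && mem_f=/sys/devices/system/node/node$node/meminfo
  while IFS= read -r line; do
    line=${line#Node "$node" }            # per-node files prefix every line with "Node N "
    IFS=': ' read -r var val _ <<< "$line"
    if [[ $var == "$get" ]]; then
      echo "$val"
      return 0
    fi
  done < "$mem_f"
  return 1
}
get_meminfo AnonHugePages    # prints 0 on this host, matching the trace
get_meminfo HugePages_Total  # prints 1536 after the allocation above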
00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 
-- # local get=HugePages_Surp 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104107700 kB' 'MemAvailable: 107592172 kB' 'Buffers: 2704 kB' 'Cached: 14450448 kB' 'SwapCached: 0 kB' 'Active: 11507748 kB' 'Inactive: 3523448 kB' 'Active(anon): 11033564 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580748 kB' 'Mapped: 175084 kB' 'Shmem: 10455520 kB' 'KReclaimable: 527784 kB' 'Slab: 1379216 kB' 'SReclaimable: 527784 kB' 'SUnreclaim: 851432 kB' 'KernelStack: 27536 kB' 'PageTables: 8840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12609820 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235716 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4281716 kB' 'DirectMap2M: 28952576 kB' 'DirectMap1G: 102760448 kB' 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.253 
11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.253 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.254 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.255 11:15:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104103968 kB' 'MemAvailable: 107588440 kB' 'Buffers: 2704 kB' 'Cached: 14450464 kB' 'SwapCached: 0 kB' 'Active: 11507484 kB' 'Inactive: 3523448 kB' 'Active(anon): 11033300 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580988 kB' 'Mapped: 175008 kB' 'Shmem: 10455536 kB' 'KReclaimable: 527784 kB' 'Slab: 1379180 kB' 'SReclaimable: 527784 kB' 'SUnreclaim: 851396 kB' 'KernelStack: 27488 kB' 'PageTables: 9088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12606040 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235732 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4281716 kB' 'DirectMap2M: 28952576 kB' 'DirectMap1G: 102760448 kB' 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.255 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.256 11:15:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
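[editor's note] At this point verify_nr_hugepages has read anon=0 and surp=0 and is scanning for HugePages_Rsvd; it then checks the kernel totals against the 1536 pages requested via HUGENODE. The sketch below assumes the pass condition is simply HugePages_Total equal to the request with no surplus or reserved pages; the exact bookkeeping in hugepages.sh may differ.

# Sketch of the final verification step: confirm the kernel allocated the
# requested 2 MiB pages (1536 here) and that none are surplus or reserved.
expected=1536
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
if (( total == expected && surp == 0 && rsvd == 0 )); then
  echo "hugepages OK: $total allocated"
else
  echo "hugepages mismatch: total=$total surp=$surp rsvd=$rsvd (expected $expected)" >&2
  exit 1
fi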
00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.256 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.257 11:15:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:28.257 nr_hugepages=1536 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:28.257 resv_hugepages=0 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:28.257 surplus_hugepages=0 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:28.257 anon_hugepages=0 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104106324 kB' 'MemAvailable: 107590796 kB' 'Buffers: 2704 kB' 'Cached: 14450488 kB' 'SwapCached: 0 kB' 'Active: 11506408 kB' 'Inactive: 3523448 kB' 'Active(anon): 11032224 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 
0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579924 kB' 'Mapped: 175000 kB' 'Shmem: 10455560 kB' 'KReclaimable: 527784 kB' 'Slab: 1379164 kB' 'SReclaimable: 527784 kB' 'SUnreclaim: 851380 kB' 'KernelStack: 27216 kB' 'PageTables: 8288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12606560 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235556 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4281716 kB' 'DirectMap2M: 28952576 kB' 'DirectMap1G: 102760448 kB' 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
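
At this point the custom_alloc test has confirmed nr_hugepages=1536 with no reserved or surplus pages, and the /proc/meminfo snapshot above is consistent with that: 1536 pages at the 2048 kB Hugepagesize account for the 3145728 kB Hugetlb figure. Further down, the same get_meminfo walk is repeated against the node0 and node1 meminfo files, which report 512 and 1024 hugepages respectively. A small sketch of that accounting check, using the standard procfs/sysfs paths directly rather than the test's helpers (the variable names are illustrative):

    #!/usr/bin/env bash
    # Sketch of the accounting the trace verifies: system-wide totals plus a
    # custom per-node split (512 on node0, 1024 on node1) summing to 1536.
    expected=1536
    size_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    echo "default-size hugetlb memory: $(( total * size_kb )) kB"   # 1536 * 2048 = 3145728 here
    sum=0
    for f in /sys/devices/system/node/node[0-9]*/meminfo; do
        pages=$(awk '/HugePages_Total:/ {print $NF}' "$f")
        node=${f%/meminfo}; node=${node##*/}
        echo "$node: HugePages_Total=$pages"
        sum=$(( sum + pages ))
    done
    (( sum == expected )) || echo "per-node hugepages do not sum to $expected"
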
00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.257 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.258 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54275644 kB' 'MemUsed: 11383364 kB' 'SwapCached: 0 kB' 'Active: 4793360 kB' 'Inactive: 3299996 kB' 'Active(anon): 4640812 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3299996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7833336 kB' 'Mapped: 79612 kB' 'AnonPages: 263152 kB' 'Shmem: 4380792 kB' 'KernelStack: 15384 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 394352 kB' 'Slab: 884332 kB' 'SReclaimable: 394352 kB' 'SUnreclaim: 489980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.259 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.260 11:15:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 11:15:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 49831024 kB' 'MemUsed: 10848848 kB' 'SwapCached: 0 kB' 'Active: 6712924 kB' 'Inactive: 223452 kB' 'Active(anon): 6391288 kB' 'Inactive(anon): 0 kB' 'Active(file): 321636 kB' 'Inactive(file): 223452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6619872 kB' 'Mapped: 95372 kB' 'AnonPages: 316620 kB' 'Shmem: 6074784 kB' 'KernelStack: 11848 kB' 'PageTables: 3868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 133432 kB' 'Slab: 494432 kB' 'SReclaimable: 133432 kB' 'SUnreclaim: 361000 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.260 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.261 11:15:56 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.261 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [field-by-field scan continues: Mlocked through HugePages_Free are each read with IFS=': ' / read -r and skipped via continue, none matching HugePages_Surp] 00:04:28.262 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[
HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.262 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.262 11:15:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:28.262 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:28.262 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:28.262 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:28.262 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:28.262 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:28.262 node0=512 expecting 512 00:04:28.262 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:28.262 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:28.262 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:28.262 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:28.262 node1=1024 expecting 1024 00:04:28.262 11:15:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:28.262 00:04:28.262 real 0m3.328s 00:04:28.262 user 0m1.196s 00:04:28.262 sys 0m2.060s 00:04:28.262 11:15:56 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.262 11:15:56 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:28.262 ************************************ 00:04:28.262 END TEST custom_alloc 00:04:28.262 ************************************ 00:04:28.523 11:15:56 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:28.523 11:15:56 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:28.523 11:15:56 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.523 11:15:56 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.523 11:15:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:28.523 ************************************ 00:04:28.523 START TEST no_shrink_alloc 00:04:28.523 ************************************ 00:04:28.523 11:15:57 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:28.523 11:15:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:28.523 11:15:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:28.523 11:15:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:28.523 11:15:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:28.523 11:15:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:28.523 11:15:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:28.523 11:15:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:28.523 11:15:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:28.523 11:15:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- 
# get_test_nr_hugepages_per_node 0 00:04:28.523 11:15:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:28.523 11:15:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:28.523 11:15:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:28.523 11:15:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:28.523 11:15:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:28.523 11:15:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:28.523 11:15:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:28.523 11:15:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:28.523 11:15:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:28.523 11:15:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:28.523 11:15:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:28.523 11:15:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.523 11:15:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:31.828 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:31.828 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:31.828 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:31.828 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:31.828 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:31.828 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:31.828 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:31.828 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:31.828 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:31.828 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:31.828 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:31.828 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:31.828 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:31.828 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:31.828 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:31.828 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:31.828 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:32.091 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:32.091 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:32.091 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:32.091 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:32.091 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:32.091 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:32.091 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:32.091 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:32.091 
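The get_meminfo traces that follow (AnonHugePages, then HugePages_Surp and HugePages_Rsvd) all exercise the same setup/common.sh helper: it picks /proc/meminfo or, for a per-node query, /sys/devices/system/node/node<N>/meminfo, then walks the file with IFS=': ' and read -r until the requested key matches and echoes its value. A minimal sketch of that lookup for the global file only, under the hypothetical name get_meminfo_value (not the helper in the tree):

get_meminfo_value() {
    # Print the value column of one /proc/meminfo field, e.g. AnonHugePages.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # "MemTotal: 126338880 kB" splits into var=MemTotal val=126338880 _=kB
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}
# e.g.: anon=$(get_meminfo_value AnonHugePages)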
11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:32.091 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:32.091 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:32.091 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:32.091 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.091 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.091 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.092 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.092 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.092 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.092 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.092 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.092 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105127960 kB' 'MemAvailable: 108612432 kB' 'Buffers: 2704 kB' 'Cached: 14450636 kB' 'SwapCached: 0 kB' 'Active: 11510392 kB' 'Inactive: 3523448 kB' 'Active(anon): 11036208 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583428 kB' 'Mapped: 175160 kB' 'Shmem: 10455708 kB' 'KReclaimable: 527784 kB' 'Slab: 1378756 kB' 'SReclaimable: 527784 kB' 'SUnreclaim: 850972 kB' 'KernelStack: 27328 kB' 'PageTables: 8500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12608028 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235476 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4281716 kB' 'DirectMap2M: 28952576 kB' 'DirectMap1G: 102760448 kB' 00:04:32.092 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.092 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.092 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.092 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.092 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.092 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.092 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.092 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.092 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.092 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [field-by-field scan continues: Buffers through VmallocTotal are each read with IFS=': ' / read -r and skipped via continue, none matching AnonHugePages] 00:04:32.092 11:16:00 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.092 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.092 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.092 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.092 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.092 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105131048 kB' 'MemAvailable: 108615520 kB' 'Buffers: 2704 kB' 'Cached: 14450640 kB' 'SwapCached: 0 kB' 'Active: 11509800 kB' 
'Inactive: 3523448 kB' 'Active(anon): 11035616 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582800 kB' 'Mapped: 175096 kB' 'Shmem: 10455712 kB' 'KReclaimable: 527784 kB' 'Slab: 1378728 kB' 'SReclaimable: 527784 kB' 'SUnreclaim: 850944 kB' 'KernelStack: 27264 kB' 'PageTables: 8280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12608048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235428 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4281716 kB' 'DirectMap2M: 28952576 kB' 'DirectMap1G: 102760448 kB' 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.093 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.093 
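At this point in the trace anon has already been set to 0 and the helper is being re-run for HugePages_Surp (HugePages_Rsvd follows right after); the dumps above show HugePages_Total and HugePages_Free both at 1024 with zero surplus and reserved pages. The same four counters can be read directly with awk; a rough sketch of that shortcut (variable names are illustrative, the keys are the real /proc/meminfo fields):

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
free=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
echo "total=$total free=$free surp=$surp resv=$resv"
# A healthy static pool, as in the dump above, shows 0 surplus and 0 reserved pages.
(( surp == 0 && resv == 0 )) || echo 'unexpected surplus/reserved huge pages' >&2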
11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ [field-by-field scan continues: Active through Unaccepted are each read with IFS=': ' / read -r and skipped via continue, none matching HugePages_Surp] 00:04:32.094 11:16:00
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105131400 kB' 'MemAvailable: 108615872 kB' 'Buffers: 2704 kB' 'Cached: 14450656 kB' 'SwapCached: 0 kB' 'Active: 11509332 kB' 'Inactive: 3523448 kB' 'Active(anon): 11035148 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 582844 kB' 'Mapped: 175020 kB' 'Shmem: 10455728 kB' 'KReclaimable: 527784 kB' 'Slab: 1378748 kB' 'SReclaimable: 527784 kB' 'SUnreclaim: 850964 kB' 'KernelStack: 27312 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12608068 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235428 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4281716 kB' 'DirectMap2M: 28952576 kB' 'DirectMap1G: 102760448 kB' 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.094 11:16:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.094 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 
11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.095 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.096 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.096 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.096 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.096 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.096 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.096 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.096 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.096 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.096 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.096 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.096 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.096 11:16:00 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:32.096 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.096 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.096 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.096 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.096 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.096 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.096 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.096 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.096 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:32.096 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:32.096 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:32.096 nr_hugepages=1024 00:04:32.096 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:32.096 resv_hugepages=0 00:04:32.096 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:32.096 surplus_hugepages=0 00:04:32.096 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:32.096 anon_hugepages=0 00:04:32.096 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:32.096 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105129720 kB' 'MemAvailable: 108614192 kB' 'Buffers: 2704 kB' 'Cached: 14450696 kB' 'SwapCached: 0 kB' 'Active: 11508872 kB' 'Inactive: 3523448 kB' 'Active(anon): 11034688 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 
kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582312 kB' 'Mapped: 175020 kB' 'Shmem: 10455768 kB' 'KReclaimable: 527784 kB' 'Slab: 1378748 kB' 'SReclaimable: 527784 kB' 'SUnreclaim: 850964 kB' 'KernelStack: 27296 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12608092 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235444 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4281716 kB' 'DirectMap2M: 28952576 kB' 'DirectMap1G: 102760448 kB' 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.357 11:16:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.357 11:16:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.357 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 
11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:32.358 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53207168 kB' 'MemUsed: 12451840 kB' 'SwapCached: 0 kB' 'Active: 4796088 kB' 'Inactive: 3299996 kB' 'Active(anon): 4643540 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3299996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7833424 kB' 'Mapped: 79648 kB' 'AnonPages: 265956 kB' 'Shmem: 4380880 kB' 'KernelStack: 15464 kB' 'PageTables: 4540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 
394352 kB' 'Slab: 884244 kB' 'SReclaimable: 394352 kB' 'SUnreclaim: 489892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.359 
11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.359 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.360 11:16:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:32.360 node0=1024 expecting 1024 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.360 11:16:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:35.657 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:35.657 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:35.657 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:35.657 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:35.657 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:35.657 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:35.657 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:35.657 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:35.657 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:35.657 0000:65:00.0 
(144d a80a): Already using the vfio-pci driver 00:04:35.657 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:35.657 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:35.657 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:35.657 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:35.657 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:35.657 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:35.657 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:35.922 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105155224 kB' 'MemAvailable: 108639696 kB' 'Buffers: 2704 kB' 'Cached: 14450788 kB' 'SwapCached: 0 kB' 'Active: 11509264 kB' 'Inactive: 3523448 kB' 'Active(anon): 11035080 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582608 kB' 'Mapped: 175104 kB' 'Shmem: 10455860 kB' 'KReclaimable: 527784 kB' 'Slab: 1379148 kB' 'SReclaimable: 527784 kB' 'SUnreclaim: 851364 kB' 'KernelStack: 27312 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'CommitLimit: 70509468 kB' 'Committed_AS: 12609244 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235444 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4281716 kB' 'DirectMap2M: 28952576 kB' 'DirectMap1G: 102760448 kB' 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.922 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.923 11:16:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.923 11:16:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.923 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 
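[editor's note] The long runs of "continue" traces above come from setup/common.sh's get_meminfo helper scanning /proc/meminfo one line at a time: every key that does not match the requested field (here AnonHugePages) is skipped, and the matching line's value is echoed and returned, which is where the "echo 0" / "anon=0" pair comes from. The snippet below is a minimal plain-bash sketch of that pattern only, under the assumption of a single-node /proc/meminfo scan; the helper name get_meminfo_value and the omission of the per-node /sys/devices/system/node/*/meminfo path are simplifications for illustration, not the actual setup/common.sh implementation.

    # Sketch of the scan visible in the trace: split each meminfo line on ': ',
    # skip keys that do not match, print the value of the first match.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-matching keys produce the 'continue' traces above
            echo "$val"                        # the ' kB' suffix lands in the discarded third field
            return 0
        done < /proc/meminfo
        return 1
    }

    # Usage matching the log: get_meminfo_value AnonHugePages -> 0 (hence anon=0),
    # and the same scan is repeated next for HugePages_Surp and HugePages_Rsvd.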
00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105154652 kB' 'MemAvailable: 108639124 kB' 'Buffers: 2704 kB' 'Cached: 14450792 kB' 'SwapCached: 0 kB' 'Active: 11509136 kB' 'Inactive: 3523448 kB' 'Active(anon): 11034952 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582440 kB' 'Mapped: 175040 kB' 'Shmem: 10455864 kB' 'KReclaimable: 527784 kB' 'Slab: 1379164 kB' 'SReclaimable: 527784 kB' 'SUnreclaim: 851380 kB' 'KernelStack: 27312 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12609260 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235460 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4281716 kB' 'DirectMap2M: 28952576 kB' 'DirectMap1G: 102760448 kB' 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.924 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.925 11:16:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.925 11:16:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.925 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node= 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105156948 kB' 'MemAvailable: 108641420 kB' 'Buffers: 2704 kB' 'Cached: 14450812 kB' 'SwapCached: 0 kB' 'Active: 11509156 kB' 'Inactive: 3523448 kB' 'Active(anon): 11034972 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582440 kB' 'Mapped: 175040 kB' 'Shmem: 10455884 kB' 'KReclaimable: 527784 kB' 'Slab: 1379132 kB' 'SReclaimable: 527784 kB' 'SUnreclaim: 851348 kB' 'KernelStack: 27312 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12609284 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235460 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4281716 kB' 'DirectMap2M: 28952576 kB' 'DirectMap1G: 102760448 kB' 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.926 11:16:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.926 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.927 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.928 11:16:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:35.928 nr_hugepages=1024 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:35.928 resv_hugepages=0 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:35.928 surplus_hugepages=0 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:35.928 anon_hugepages=0 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.928 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105156948 kB' 'MemAvailable: 108641420 kB' 'Buffers: 2704 kB' 'Cached: 14450852 kB' 'SwapCached: 0 kB' 'Active: 11508824 kB' 'Inactive: 3523448 kB' 'Active(anon): 11034640 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582044 kB' 'Mapped: 175040 kB' 'Shmem: 10455924 kB' 'KReclaimable: 527784 kB' 'Slab: 1379132 kB' 'SReclaimable: 527784 kB' 'SUnreclaim: 851348 kB' 'KernelStack: 27296 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12609304 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235460 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4281716 kB' 'DirectMap2M: 28952576 kB' 'DirectMap1G: 102760448 kB' 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.929 11:16:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
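The repeated "[[ <key> == HugePages_Total ]] ... continue" records above and below come from setup/common.sh's get_meminfo helper: it reads /proc/meminfo (or a per-node meminfo under /sys when a node index is passed), walks the keys one at a time, and prints the value of the first key that matches the request. The following is a minimal standalone sketch of that lookup, written for illustration rather than copied from the SPDK tree; the function name and exact parsing details are assumptions.

#!/usr/bin/env bash
# Illustrative re-creation of the meminfo lookup exercised in the trace above;
# not the SPDK helper itself.
get_meminfo_sketch() {
    local get=$1 node=${2:-}         # key to look up, optional NUMA node index
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        line=${line#Node "$node" }   # per-node files prefix each line with "Node N "
        var=${line%%:*}              # e.g. HugePages_Total
        read -r val _ <<< "${line#*:}"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"         # value in kB, or a bare count for HugePages_*
            return 0
        fi
    done < "$mem_f"
    return 1
}
# e.g. get_meminfo_sketch HugePages_Total; get_meminfo_sketch HugePages_Surp 0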
00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.929 11:16:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.929 
11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.929 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:35.930 11:16:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53209684 kB' 'MemUsed: 12449324 kB' 'SwapCached: 0 kB' 'Active: 4796536 kB' 'Inactive: 3299996 kB' 'Active(anon): 4643988 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3299996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7833476 kB' 'Mapped: 79668 kB' 'AnonPages: 266308 kB' 'Shmem: 4380932 kB' 'KernelStack: 15464 kB' 'PageTables: 4540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 394352 kB' 'Slab: 884620 kB' 'SReclaimable: 394352 kB' 'SUnreclaim: 490268 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.930 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.931 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.192 
11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:36.192 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:36.193 11:16:04 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:36.193 node0=1024 expecting 1024 00:04:36.193 11:16:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:36.193 00:04:36.193 real 0m7.623s 00:04:36.193 user 0m3.014s 00:04:36.193 sys 0m4.723s 00:04:36.193 11:16:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.193 11:16:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:36.193 ************************************ 00:04:36.193 END TEST no_shrink_alloc 00:04:36.193 ************************************ 00:04:36.193 11:16:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:36.193 11:16:04 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:36.193 11:16:04 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:36.193 11:16:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:36.193 11:16:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:36.193 11:16:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:36.193 11:16:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:36.193 11:16:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:36.193 11:16:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:36.193 11:16:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:36.193 11:16:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:36.193 11:16:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:36.193 11:16:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:36.193 11:16:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:36.193 11:16:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:36.193 00:04:36.193 real 0m26.341s 00:04:36.193 user 0m10.089s 00:04:36.193 sys 0m16.387s 00:04:36.193 11:16:04 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.193 11:16:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:36.193 ************************************ 00:04:36.193 END TEST hugepages 00:04:36.193 ************************************ 00:04:36.193 11:16:04 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:36.193 11:16:04 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:36.193 11:16:04 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.193 11:16:04 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.193 11:16:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:36.193 ************************************ 00:04:36.193 START TEST driver 00:04:36.193 ************************************ 00:04:36.193 11:16:04 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:36.193 * Looking for test storage... 
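The hugepages check that just finished above amounts to scanning /proc/meminfo field by field (the long run of "continue" traces) until a HugePages_* counter turns up, then asserting that the 1024 pages reserved on node0 survived the allocation test. A minimal stand-alone sketch of that scan follows; the helper name get_meminfo_field is hypothetical and not part of the SPDK setup scripts, but the IFS=': ' read loop mirrors what the trace shows.

#!/usr/bin/env bash
# Hypothetical sketch (not the actual SPDK helper): pull one HugePages_*
# counter out of /proc/meminfo the same way the traced loop does, i.e.
# splitting each line into "field value" with IFS=': '.
get_meminfo_field() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$want" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

# Example assertion in the spirit of no_shrink_alloc: the 1024 huge pages
# reserved for the test should still be accounted for after the allocation.
total=$(get_meminfo_field HugePages_Total)
echo "node0=${total} expecting 1024"
[[ "$total" -eq 1024 ]]

On a node configured like the one in this run, the sketch should print the same "node0=1024 expecting 1024" line the suite echoes before tearing the huge pages back down.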
00:04:36.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:36.193 11:16:04 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:36.193 11:16:04 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:36.193 11:16:04 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:41.549 11:16:09 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:41.549 11:16:09 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.549 11:16:09 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.549 11:16:09 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:41.549 ************************************ 00:04:41.549 START TEST guess_driver 00:04:41.549 ************************************ 00:04:41.549 11:16:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:41.549 11:16:09 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:41.549 11:16:09 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:41.550 11:16:09 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:41.550 11:16:09 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:41.550 11:16:09 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:41.550 11:16:09 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:41.550 11:16:09 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:41.550 11:16:09 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:41.550 11:16:09 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:41.550 11:16:09 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 314 > 0 )) 00:04:41.550 11:16:09 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:41.550 11:16:09 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:41.550 11:16:09 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:41.550 11:16:09 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:41.550 11:16:09 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:41.550 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:41.550 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:41.550 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:41.550 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:41.550 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:41.550 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:41.550 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:41.550 11:16:09 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:41.550 11:16:09 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:41.550 11:16:09 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:41.550 11:16:09 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:41.550 11:16:09 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:41.550 Looking for driver=vfio-pci 00:04:41.550 11:16:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.550 11:16:09 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:41.550 11:16:09 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.550 11:16:09 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.853 11:16:13 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.853 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:45.114 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:45.114 11:16:13 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:45.114 11:16:13 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:45.114 11:16:13 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:50.405 00:04:50.405 real 0m8.775s 00:04:50.405 user 0m2.892s 00:04:50.405 sys 0m5.060s 00:04:50.405 11:16:18 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.405 11:16:18 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:50.405 ************************************ 00:04:50.405 END TEST guess_driver 00:04:50.405 ************************************ 00:04:50.405 11:16:18 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:50.405 00:04:50.405 real 0m13.851s 00:04:50.405 user 0m4.360s 00:04:50.405 sys 0m7.846s 00:04:50.405 11:16:18 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.405 11:16:18 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:50.405 ************************************ 00:04:50.405 END TEST driver 00:04:50.405 ************************************ 00:04:50.405 11:16:18 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:50.405 11:16:18 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:50.405 11:16:18 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.405 11:16:18 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.405 11:16:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:50.405 ************************************ 00:04:50.405 START TEST devices 00:04:50.405 ************************************ 00:04:50.405 11:16:18 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:50.405 * Looking for test storage... 00:04:50.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:50.405 11:16:18 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:50.405 11:16:18 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:50.405 11:16:18 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:50.405 11:16:18 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:54.611 11:16:22 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:54.611 11:16:22 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:54.611 11:16:22 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:54.611 11:16:22 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:54.611 11:16:22 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:54.611 11:16:22 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:54.611 11:16:22 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:54.611 11:16:22 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:54.611 11:16:22 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:54.611 11:16:22 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:54.611 11:16:22 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:54.611 11:16:22 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:54.611 11:16:22 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:54.611 11:16:22 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:54.611 11:16:22 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:54.611 11:16:22 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:54.611 11:16:22 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:54.611 11:16:22 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:54.611 11:16:22 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:54.611 11:16:22 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:54.611 11:16:22 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:54.611 
11:16:22 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:54.611 No valid GPT data, bailing 00:04:54.611 11:16:22 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:54.611 11:16:22 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:54.612 11:16:22 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:54.612 11:16:22 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:54.612 11:16:22 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:54.612 11:16:22 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:54.612 11:16:22 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:54.612 11:16:22 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:54.612 11:16:22 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:54.612 11:16:22 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:54.612 11:16:22 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:54.612 11:16:22 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:54.612 11:16:22 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:54.612 11:16:22 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.612 11:16:22 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.612 11:16:22 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:54.612 ************************************ 00:04:54.612 START TEST nvme_mount 00:04:54.612 ************************************ 00:04:54.612 11:16:22 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:54.612 11:16:22 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:54.612 11:16:22 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:54.612 11:16:22 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.612 11:16:22 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:54.612 11:16:22 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:54.612 11:16:22 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:54.612 11:16:22 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:54.612 11:16:22 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:54.612 11:16:22 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:54.612 11:16:22 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:54.612 11:16:22 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:54.612 11:16:22 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:54.612 11:16:22 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:54.612 11:16:22 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:54.612 11:16:22 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:54.612 11:16:22 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:04:54.612 11:16:22 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:54.612 11:16:22 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:54.612 11:16:22 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:55.181 Creating new GPT entries in memory. 00:04:55.181 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:55.181 other utilities. 00:04:55.181 11:16:23 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:55.181 11:16:23 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:55.181 11:16:23 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:55.181 11:16:23 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:55.181 11:16:23 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:56.565 Creating new GPT entries in memory. 00:04:56.565 The operation has completed successfully. 00:04:56.565 11:16:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:56.565 11:16:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:56.565 11:16:24 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3313220 00:04:56.565 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.565 11:16:24 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:56.565 11:16:24 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.565 11:16:24 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:56.565 11:16:24 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:56.565 11:16:24 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.565 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:56.565 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:56.565 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:56.565 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.565 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:56.565 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:56.565 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:56.565 11:16:24 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:56.565 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:56.565 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.565 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:56.566 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:56.566 11:16:24 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.566 11:16:24 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:59.870 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:59.870 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:00.130 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:00.130 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:00.130 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:00.130 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:00.130 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:00.130 11:16:28 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:00.130 11:16:28 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.391 11:16:28 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:00.391 11:16:28 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:00.391 11:16:28 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.391 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:00.391 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:00.391 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:00.391 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.391 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:00.391 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:00.391 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:00.391 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:00.391 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:00.391 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.391 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:00.391 11:16:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:00.391 11:16:28 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.391 11:16:28 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:03.689 11:16:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.689 11:16:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.689 11:16:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.689 11:16:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.689 11:16:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.689 11:16:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.689 11:16:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.689 11:16:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.689 11:16:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.689 11:16:31 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.689 11:16:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.689 11:16:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.689 11:16:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.689 11:16:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.689 11:16:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.689 11:16:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.689 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.689 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:03.689 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:03.689 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.689 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.689 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.689 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.689 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.689 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.689 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.689 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.689 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.689 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.689 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.689 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.689 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.689 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.689 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.689 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.689 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.950 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:03.950 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:03.950 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:03.950 11:16:32 
setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:03.950 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:03.950 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:03.950 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:05:03.950 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:03.950 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:03.950 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:03.950 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:03.950 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:03.950 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:03.950 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:03.950 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.950 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:03.950 11:16:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:03.950 11:16:32 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.950 11:16:32 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.247 11:16:35 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:07.247 11:16:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.553 11:16:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:07.553 11:16:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:07.553 11:16:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:07.553 11:16:36 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:07.553 11:16:36 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:07.553 11:16:36 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:07.553 11:16:36 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:07.553 11:16:36 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:07.553 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:07.553 00:05:07.553 real 0m13.250s 00:05:07.553 user 0m4.087s 00:05:07.553 sys 0m7.013s 00:05:07.553 11:16:36 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.553 11:16:36 
setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:07.553 ************************************ 00:05:07.553 END TEST nvme_mount 00:05:07.553 ************************************ 00:05:07.553 11:16:36 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:07.553 11:16:36 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:07.553 11:16:36 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.553 11:16:36 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.553 11:16:36 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:07.553 ************************************ 00:05:07.553 START TEST dm_mount 00:05:07.553 ************************************ 00:05:07.553 11:16:36 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:07.553 11:16:36 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:07.553 11:16:36 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:07.553 11:16:36 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:07.553 11:16:36 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:07.553 11:16:36 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:07.553 11:16:36 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:07.553 11:16:36 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:07.553 11:16:36 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:07.553 11:16:36 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:07.553 11:16:36 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:07.553 11:16:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:07.553 11:16:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:07.553 11:16:36 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:07.554 11:16:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:07.554 11:16:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:07.554 11:16:36 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:07.554 11:16:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:07.554 11:16:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:07.554 11:16:36 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:07.554 11:16:36 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:07.554 11:16:36 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:08.495 Creating new GPT entries in memory. 00:05:08.495 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:08.495 other utilities. 00:05:08.495 11:16:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:08.495 11:16:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:08.495 11:16:37 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:08.495 11:16:37 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:08.495 11:16:37 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:09.878 Creating new GPT entries in memory. 00:05:09.878 The operation has completed successfully. 00:05:09.878 11:16:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:09.878 11:16:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:09.878 11:16:38 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:09.878 11:16:38 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:09.878 11:16:38 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:10.854 The operation has completed successfully. 00:05:10.854 11:16:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:10.854 11:16:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:10.854 11:16:39 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3318215 00:05:10.854 11:16:39 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:10.854 11:16:39 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:10.854 11:16:39 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:10.854 11:16:39 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:10.854 11:16:39 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:10.854 11:16:39 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:10.854 11:16:39 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:10.854 11:16:39 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:10.854 11:16:39 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:10.854 11:16:39 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:10.854 11:16:39 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:10.854 11:16:39 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:10.854 11:16:39 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:10.854 11:16:39 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:10.854 11:16:39 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:10.854 11:16:39 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:10.854 11:16:39 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:10.854 11:16:39 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:10.854 11:16:39 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:10.854 11:16:39 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:10.855 11:16:39 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:10.855 11:16:39 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:10.855 11:16:39 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:10.855 11:16:39 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:10.855 11:16:39 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:10.855 11:16:39 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:10.855 11:16:39 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:10.855 11:16:39 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:10.855 11:16:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.855 11:16:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:10.855 11:16:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:10.855 11:16:39 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.855 11:16:39 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:13.398 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.398 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.398 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.398 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.398 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.398 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.398 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.398 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.398 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.398 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.398 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.398 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.398 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.398 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.398 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.398 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.658 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.658 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:13.658 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:13.658 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.658 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.658 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.658 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.658 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.658 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.658 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.658 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.658 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.658 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.658 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.658 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.658 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.658 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.658 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.658 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.658 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.919 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:13.919 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:13.919 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:13.919 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:13.919 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:13.919 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:13.919 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:13.919 11:16:42 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:13.919 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:13.919 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:13.919 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:13.919 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:13.919 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:13.919 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:13.919 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:13.919 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.919 11:16:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:13.919 11:16:42 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.919 11:16:42 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:17.215 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.215 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.215 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.215 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.215 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.215 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.215 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.215 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.215 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.215 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.215 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.215 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.215 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.216 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.216 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.216 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.216 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.216 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:17.216 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:17.216 11:16:45 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.216 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.216 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.216 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.216 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.216 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.216 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.216 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.216 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.216 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.216 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.216 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.216 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.216 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.216 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.216 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:17.216 11:16:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.476 11:16:46 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:17.476 11:16:46 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:17.476 11:16:46 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:17.476 11:16:46 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:17.476 11:16:46 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:17.736 11:16:46 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:17.736 11:16:46 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:17.736 11:16:46 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:17.736 11:16:46 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:17.736 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:17.736 11:16:46 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:17.736 11:16:46 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:17.736 00:05:17.736 real 0m10.075s 00:05:17.736 user 0m2.572s 00:05:17.736 sys 0m4.436s 00:05:17.736 11:16:46 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.736 11:16:46 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:17.736 ************************************ 00:05:17.736 END TEST dm_mount 00:05:17.736 ************************************ 00:05:17.736 11:16:46 setup.sh.devices -- common/autotest_common.sh@1142 -- # 
return 0 00:05:17.736 11:16:46 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:17.736 11:16:46 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:17.736 11:16:46 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:17.736 11:16:46 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:17.736 11:16:46 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:17.736 11:16:46 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:17.736 11:16:46 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:17.997 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:17.997 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:17.997 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:17.997 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:17.997 11:16:46 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:17.997 11:16:46 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:17.997 11:16:46 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:17.997 11:16:46 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:17.997 11:16:46 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:17.997 11:16:46 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:17.997 11:16:46 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:17.997 00:05:17.997 real 0m27.901s 00:05:17.997 user 0m8.249s 00:05:17.997 sys 0m14.304s 00:05:17.997 11:16:46 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.997 11:16:46 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:17.997 ************************************ 00:05:17.997 END TEST devices 00:05:17.997 ************************************ 00:05:17.997 11:16:46 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:17.997 00:05:17.997 real 1m34.437s 00:05:17.997 user 0m31.586s 00:05:17.997 sys 0m53.789s 00:05:17.997 11:16:46 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.997 11:16:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:17.997 ************************************ 00:05:17.997 END TEST setup.sh 00:05:17.997 ************************************ 00:05:17.997 11:16:46 -- common/autotest_common.sh@1142 -- # return 0 00:05:17.997 11:16:46 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:21.289 Hugepages 00:05:21.289 node hugesize free / total 00:05:21.289 node0 1048576kB 0 / 0 00:05:21.289 node0 2048kB 2048 / 2048 00:05:21.289 node1 1048576kB 0 / 0 00:05:21.289 node1 2048kB 0 / 0 00:05:21.289 00:05:21.290 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:21.290 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:21.290 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:21.290 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:21.290 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:21.290 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:21.290 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:21.290 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:21.290 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:21.549 NVMe 
0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:21.549 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:21.550 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:05:21.550 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:21.550 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:21.550 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:21.550 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:21.550 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:21.550 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:21.550 11:16:50 -- spdk/autotest.sh@130 -- # uname -s 00:05:21.550 11:16:50 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:21.550 11:16:50 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:21.550 11:16:50 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:24.843 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:24.843 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:24.843 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:24.843 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:24.843 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:24.843 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:24.843 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:24.843 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:24.843 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:24.843 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:24.843 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:24.843 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:24.843 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:24.843 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:25.103 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:25.103 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:27.011 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:27.011 11:16:55 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:27.951 11:16:56 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:27.951 11:16:56 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:27.951 11:16:56 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:27.951 11:16:56 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:27.951 11:16:56 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:27.951 11:16:56 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:27.951 11:16:56 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:27.951 11:16:56 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:27.951 11:16:56 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:28.216 11:16:56 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:28.216 11:16:56 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:28.216 11:16:56 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:31.513 Waiting for block devices as requested 00:05:31.513 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:31.513 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:31.513 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:31.772 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:31.772 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:31.772 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:31.772 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:32.032 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:32.032 0000:65:00.0 (144d a80a): 
vfio-pci -> nvme 00:05:32.292 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:32.292 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:32.292 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:32.553 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:32.553 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:32.553 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:32.553 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:32.812 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:33.071 11:17:01 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:33.071 11:17:01 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:33.071 11:17:01 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:33.071 11:17:01 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:05:33.071 11:17:01 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:33.071 11:17:01 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:33.071 11:17:01 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:33.071 11:17:01 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:33.071 11:17:01 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:33.071 11:17:01 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:33.071 11:17:01 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:33.071 11:17:01 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:33.071 11:17:01 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:33.071 11:17:01 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:05:33.071 11:17:01 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:33.071 11:17:01 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:33.071 11:17:01 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:33.071 11:17:01 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:33.071 11:17:01 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:33.071 11:17:01 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:33.071 11:17:01 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:33.071 11:17:01 -- common/autotest_common.sh@1557 -- # continue 00:05:33.071 11:17:01 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:33.071 11:17:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:33.071 11:17:01 -- common/autotest_common.sh@10 -- # set +x 00:05:33.071 11:17:01 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:33.071 11:17:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.071 11:17:01 -- common/autotest_common.sh@10 -- # set +x 00:05:33.071 11:17:01 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:36.366 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:36.366 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:36.366 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:36.367 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:36.367 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:36.367 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:36.367 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:36.367 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:36.367 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:36.367 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 
00:05:36.367 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:36.367 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:36.367 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:36.367 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:36.367 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:36.367 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:36.627 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:36.887 11:17:05 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:36.887 11:17:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:36.887 11:17:05 -- common/autotest_common.sh@10 -- # set +x 00:05:36.887 11:17:05 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:36.887 11:17:05 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:36.887 11:17:05 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:36.887 11:17:05 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:36.887 11:17:05 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:36.887 11:17:05 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:36.887 11:17:05 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:36.887 11:17:05 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:36.887 11:17:05 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:36.887 11:17:05 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:36.887 11:17:05 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:36.887 11:17:05 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:36.887 11:17:05 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:36.887 11:17:05 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:36.887 11:17:05 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:36.887 11:17:05 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:05:36.887 11:17:05 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:36.887 11:17:05 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:36.887 11:17:05 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:36.887 11:17:05 -- common/autotest_common.sh@1593 -- # return 0 00:05:36.887 11:17:05 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:36.887 11:17:05 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:36.887 11:17:05 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:36.887 11:17:05 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:36.887 11:17:05 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:36.887 11:17:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:36.887 11:17:05 -- common/autotest_common.sh@10 -- # set +x 00:05:36.887 11:17:05 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:36.887 11:17:05 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:36.887 11:17:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.887 11:17:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.887 11:17:05 -- common/autotest_common.sh@10 -- # set +x 00:05:36.887 ************************************ 00:05:36.887 START TEST env 00:05:36.887 ************************************ 00:05:36.887 11:17:05 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:37.148 * Looking for test storage... 
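The pre-cleanup and opal-revert steps traced above reduce to a small amount of shell: enumerate the NVMe BDFs with scripts/gen_nvme.sh piped through jq, then read each controller's PCI device ID from sysfs and its OACS/UNVMCAP fields with nvme-cli. A minimal standalone sketch of the same checks (an illustration, not part of the harness), assuming the SPDK repo root is the current directory, nvme-cli is installed, and the controller is still bound to the kernel nvme driver:

    # Enumerate NVMe BDFs the same way the harness does.
    bdfs=($(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        # PCI device ID; the drive in this run reports 0xa80a, so the 0x0a54 opal path is skipped.
        dev_id=$(cat "/sys/bus/pci/devices/${bdf}/device")
        # Resolve the kernel controller node (nvme0, nvme1, ...) behind this BDF.
        ctrlr=$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "${bdf}/nvme/nvme")")
        # OACS bit 3 (0x8) advertises namespace management; UNVMCAP is the unallocated capacity.
        oacs=$(nvme id-ctrl "/dev/${ctrlr}" | grep oacs | cut -d: -f2)
        unvmcap=$(nvme id-ctrl "/dev/${ctrlr}" | grep unvmcap | cut -d: -f2)
        echo "${bdf} device=${dev_id} ctrlr=${ctrlr} oacs=${oacs} unvmcap=${unvmcap}"
    done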
00:05:37.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:37.148 11:17:05 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:37.148 11:17:05 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.148 11:17:05 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.148 11:17:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.148 ************************************ 00:05:37.148 START TEST env_memory 00:05:37.148 ************************************ 00:05:37.148 11:17:05 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:37.148 00:05:37.148 00:05:37.148 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.148 http://cunit.sourceforge.net/ 00:05:37.148 00:05:37.148 00:05:37.148 Suite: memory 00:05:37.148 Test: alloc and free memory map ...[2024-07-15 11:17:05.754634] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:37.148 passed 00:05:37.148 Test: mem map translation ...[2024-07-15 11:17:05.779998] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:37.148 [2024-07-15 11:17:05.780023] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:37.148 [2024-07-15 11:17:05.780069] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:37.148 [2024-07-15 11:17:05.780075] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:37.148 passed 00:05:37.148 Test: mem map registration ...[2024-07-15 11:17:05.835265] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:37.148 [2024-07-15 11:17:05.835284] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:37.409 passed 00:05:37.409 Test: mem map adjacent registrations ...passed 00:05:37.409 00:05:37.409 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.409 suites 1 1 n/a 0 0 00:05:37.409 tests 4 4 4 0 0 00:05:37.409 asserts 152 152 152 0 n/a 00:05:37.409 00:05:37.409 Elapsed time = 0.192 seconds 00:05:37.409 00:05:37.409 real 0m0.206s 00:05:37.409 user 0m0.196s 00:05:37.409 sys 0m0.008s 00:05:37.409 11:17:05 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.409 11:17:05 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:37.409 ************************************ 00:05:37.409 END TEST env_memory 00:05:37.409 ************************************ 00:05:37.409 11:17:05 env -- common/autotest_common.sh@1142 -- # return 0 00:05:37.409 11:17:05 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:37.409 11:17:05 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:05:37.409 11:17:05 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.409 11:17:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.409 ************************************ 00:05:37.409 START TEST env_vtophys 00:05:37.409 ************************************ 00:05:37.409 11:17:05 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:37.409 EAL: lib.eal log level changed from notice to debug 00:05:37.409 EAL: Detected lcore 0 as core 0 on socket 0 00:05:37.409 EAL: Detected lcore 1 as core 1 on socket 0 00:05:37.409 EAL: Detected lcore 2 as core 2 on socket 0 00:05:37.409 EAL: Detected lcore 3 as core 3 on socket 0 00:05:37.409 EAL: Detected lcore 4 as core 4 on socket 0 00:05:37.409 EAL: Detected lcore 5 as core 5 on socket 0 00:05:37.409 EAL: Detected lcore 6 as core 6 on socket 0 00:05:37.409 EAL: Detected lcore 7 as core 7 on socket 0 00:05:37.409 EAL: Detected lcore 8 as core 8 on socket 0 00:05:37.409 EAL: Detected lcore 9 as core 9 on socket 0 00:05:37.409 EAL: Detected lcore 10 as core 10 on socket 0 00:05:37.409 EAL: Detected lcore 11 as core 11 on socket 0 00:05:37.409 EAL: Detected lcore 12 as core 12 on socket 0 00:05:37.409 EAL: Detected lcore 13 as core 13 on socket 0 00:05:37.409 EAL: Detected lcore 14 as core 14 on socket 0 00:05:37.409 EAL: Detected lcore 15 as core 15 on socket 0 00:05:37.409 EAL: Detected lcore 16 as core 16 on socket 0 00:05:37.409 EAL: Detected lcore 17 as core 17 on socket 0 00:05:37.409 EAL: Detected lcore 18 as core 18 on socket 0 00:05:37.409 EAL: Detected lcore 19 as core 19 on socket 0 00:05:37.409 EAL: Detected lcore 20 as core 20 on socket 0 00:05:37.409 EAL: Detected lcore 21 as core 21 on socket 0 00:05:37.409 EAL: Detected lcore 22 as core 22 on socket 0 00:05:37.409 EAL: Detected lcore 23 as core 23 on socket 0 00:05:37.409 EAL: Detected lcore 24 as core 24 on socket 0 00:05:37.409 EAL: Detected lcore 25 as core 25 on socket 0 00:05:37.409 EAL: Detected lcore 26 as core 26 on socket 0 00:05:37.409 EAL: Detected lcore 27 as core 27 on socket 0 00:05:37.409 EAL: Detected lcore 28 as core 28 on socket 0 00:05:37.409 EAL: Detected lcore 29 as core 29 on socket 0 00:05:37.409 EAL: Detected lcore 30 as core 30 on socket 0 00:05:37.409 EAL: Detected lcore 31 as core 31 on socket 0 00:05:37.409 EAL: Detected lcore 32 as core 32 on socket 0 00:05:37.409 EAL: Detected lcore 33 as core 33 on socket 0 00:05:37.409 EAL: Detected lcore 34 as core 34 on socket 0 00:05:37.409 EAL: Detected lcore 35 as core 35 on socket 0 00:05:37.409 EAL: Detected lcore 36 as core 0 on socket 1 00:05:37.409 EAL: Detected lcore 37 as core 1 on socket 1 00:05:37.409 EAL: Detected lcore 38 as core 2 on socket 1 00:05:37.409 EAL: Detected lcore 39 as core 3 on socket 1 00:05:37.409 EAL: Detected lcore 40 as core 4 on socket 1 00:05:37.409 EAL: Detected lcore 41 as core 5 on socket 1 00:05:37.409 EAL: Detected lcore 42 as core 6 on socket 1 00:05:37.409 EAL: Detected lcore 43 as core 7 on socket 1 00:05:37.409 EAL: Detected lcore 44 as core 8 on socket 1 00:05:37.409 EAL: Detected lcore 45 as core 9 on socket 1 00:05:37.409 EAL: Detected lcore 46 as core 10 on socket 1 00:05:37.409 EAL: Detected lcore 47 as core 11 on socket 1 00:05:37.409 EAL: Detected lcore 48 as core 12 on socket 1 00:05:37.409 EAL: Detected lcore 49 as core 13 on socket 1 00:05:37.409 EAL: Detected lcore 50 as core 14 on socket 1 00:05:37.409 EAL: Detected lcore 51 as core 15 on socket 1 00:05:37.409 
EAL: Detected lcore 52 as core 16 on socket 1 00:05:37.409 EAL: Detected lcore 53 as core 17 on socket 1 00:05:37.409 EAL: Detected lcore 54 as core 18 on socket 1 00:05:37.409 EAL: Detected lcore 55 as core 19 on socket 1 00:05:37.409 EAL: Detected lcore 56 as core 20 on socket 1 00:05:37.409 EAL: Detected lcore 57 as core 21 on socket 1 00:05:37.409 EAL: Detected lcore 58 as core 22 on socket 1 00:05:37.409 EAL: Detected lcore 59 as core 23 on socket 1 00:05:37.409 EAL: Detected lcore 60 as core 24 on socket 1 00:05:37.409 EAL: Detected lcore 61 as core 25 on socket 1 00:05:37.409 EAL: Detected lcore 62 as core 26 on socket 1 00:05:37.409 EAL: Detected lcore 63 as core 27 on socket 1 00:05:37.410 EAL: Detected lcore 64 as core 28 on socket 1 00:05:37.410 EAL: Detected lcore 65 as core 29 on socket 1 00:05:37.410 EAL: Detected lcore 66 as core 30 on socket 1 00:05:37.410 EAL: Detected lcore 67 as core 31 on socket 1 00:05:37.410 EAL: Detected lcore 68 as core 32 on socket 1 00:05:37.410 EAL: Detected lcore 69 as core 33 on socket 1 00:05:37.410 EAL: Detected lcore 70 as core 34 on socket 1 00:05:37.410 EAL: Detected lcore 71 as core 35 on socket 1 00:05:37.410 EAL: Detected lcore 72 as core 0 on socket 0 00:05:37.410 EAL: Detected lcore 73 as core 1 on socket 0 00:05:37.410 EAL: Detected lcore 74 as core 2 on socket 0 00:05:37.410 EAL: Detected lcore 75 as core 3 on socket 0 00:05:37.410 EAL: Detected lcore 76 as core 4 on socket 0 00:05:37.410 EAL: Detected lcore 77 as core 5 on socket 0 00:05:37.410 EAL: Detected lcore 78 as core 6 on socket 0 00:05:37.410 EAL: Detected lcore 79 as core 7 on socket 0 00:05:37.410 EAL: Detected lcore 80 as core 8 on socket 0 00:05:37.410 EAL: Detected lcore 81 as core 9 on socket 0 00:05:37.410 EAL: Detected lcore 82 as core 10 on socket 0 00:05:37.410 EAL: Detected lcore 83 as core 11 on socket 0 00:05:37.410 EAL: Detected lcore 84 as core 12 on socket 0 00:05:37.410 EAL: Detected lcore 85 as core 13 on socket 0 00:05:37.410 EAL: Detected lcore 86 as core 14 on socket 0 00:05:37.410 EAL: Detected lcore 87 as core 15 on socket 0 00:05:37.410 EAL: Detected lcore 88 as core 16 on socket 0 00:05:37.410 EAL: Detected lcore 89 as core 17 on socket 0 00:05:37.410 EAL: Detected lcore 90 as core 18 on socket 0 00:05:37.410 EAL: Detected lcore 91 as core 19 on socket 0 00:05:37.410 EAL: Detected lcore 92 as core 20 on socket 0 00:05:37.410 EAL: Detected lcore 93 as core 21 on socket 0 00:05:37.410 EAL: Detected lcore 94 as core 22 on socket 0 00:05:37.410 EAL: Detected lcore 95 as core 23 on socket 0 00:05:37.410 EAL: Detected lcore 96 as core 24 on socket 0 00:05:37.410 EAL: Detected lcore 97 as core 25 on socket 0 00:05:37.410 EAL: Detected lcore 98 as core 26 on socket 0 00:05:37.410 EAL: Detected lcore 99 as core 27 on socket 0 00:05:37.410 EAL: Detected lcore 100 as core 28 on socket 0 00:05:37.410 EAL: Detected lcore 101 as core 29 on socket 0 00:05:37.410 EAL: Detected lcore 102 as core 30 on socket 0 00:05:37.410 EAL: Detected lcore 103 as core 31 on socket 0 00:05:37.410 EAL: Detected lcore 104 as core 32 on socket 0 00:05:37.410 EAL: Detected lcore 105 as core 33 on socket 0 00:05:37.410 EAL: Detected lcore 106 as core 34 on socket 0 00:05:37.410 EAL: Detected lcore 107 as core 35 on socket 0 00:05:37.410 EAL: Detected lcore 108 as core 0 on socket 1 00:05:37.410 EAL: Detected lcore 109 as core 1 on socket 1 00:05:37.410 EAL: Detected lcore 110 as core 2 on socket 1 00:05:37.410 EAL: Detected lcore 111 as core 3 on socket 1 00:05:37.410 EAL: Detected 
lcore 112 as core 4 on socket 1 00:05:37.410 EAL: Detected lcore 113 as core 5 on socket 1 00:05:37.410 EAL: Detected lcore 114 as core 6 on socket 1 00:05:37.410 EAL: Detected lcore 115 as core 7 on socket 1 00:05:37.410 EAL: Detected lcore 116 as core 8 on socket 1 00:05:37.410 EAL: Detected lcore 117 as core 9 on socket 1 00:05:37.410 EAL: Detected lcore 118 as core 10 on socket 1 00:05:37.410 EAL: Detected lcore 119 as core 11 on socket 1 00:05:37.410 EAL: Detected lcore 120 as core 12 on socket 1 00:05:37.410 EAL: Detected lcore 121 as core 13 on socket 1 00:05:37.410 EAL: Detected lcore 122 as core 14 on socket 1 00:05:37.410 EAL: Detected lcore 123 as core 15 on socket 1 00:05:37.410 EAL: Detected lcore 124 as core 16 on socket 1 00:05:37.410 EAL: Detected lcore 125 as core 17 on socket 1 00:05:37.410 EAL: Detected lcore 126 as core 18 on socket 1 00:05:37.410 EAL: Detected lcore 127 as core 19 on socket 1 00:05:37.410 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:37.410 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:37.410 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:37.410 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:37.410 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:37.410 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:37.410 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:37.410 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:37.410 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:37.410 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:37.410 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:37.410 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:37.410 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:37.410 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:37.410 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:37.410 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:37.410 EAL: Maximum logical cores by configuration: 128 00:05:37.410 EAL: Detected CPU lcores: 128 00:05:37.410 EAL: Detected NUMA nodes: 2 00:05:37.410 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:37.410 EAL: Detected shared linkage of DPDK 00:05:37.410 EAL: No shared files mode enabled, IPC will be disabled 00:05:37.410 EAL: Bus pci wants IOVA as 'DC' 00:05:37.410 EAL: Buses did not request a specific IOVA mode. 00:05:37.410 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:37.410 EAL: Selected IOVA mode 'VA' 00:05:37.410 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.410 EAL: Probing VFIO support... 00:05:37.410 EAL: IOMMU type 1 (Type 1) is supported 00:05:37.410 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:37.410 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:37.410 EAL: VFIO support initialized 00:05:37.410 EAL: Ask a virtual area of 0x2e000 bytes 00:05:37.410 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:37.410 EAL: Setting up physically contiguous memory... 
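The EAL banner above (2 NUMA nodes, IOMMU available, IOVA selected as VA, VFIO support initialized, 2 MB hugepage memseg lists) depends on host state that the harness prepared earlier with setup.sh. As a rough sketch, not something the trace itself runs, the same preconditions can be confirmed by hand on a stock Linux host:

    # IOMMU groups must be populated for EAL to choose IOVA-as-VA via VFIO.
    ls /sys/kernel/iommu_groups | wc -l          # non-zero when the IOMMU is enabled
    # vfio modules loaded (setup.sh binds the test devices to vfio-pci).
    lsmod | grep -E '^vfio'
    # 2 MB hugepages reserved per NUMA node, matching the earlier "hugesize free / total" listing.
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
    grep -E 'HugePages_(Total|Free)' /proc/meminfo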
00:05:37.410 EAL: Setting maximum number of open files to 524288 00:05:37.410 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:37.410 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:37.410 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:37.410 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.410 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:37.410 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.410 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.410 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:37.410 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:37.410 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.410 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:37.410 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.410 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.410 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:37.410 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:37.410 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.410 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:37.410 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.410 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.410 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:37.410 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:37.410 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.410 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:37.410 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.410 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.410 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:37.410 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:37.410 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:37.410 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.410 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:37.410 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.410 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.410 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:37.410 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:37.410 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.410 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:37.410 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.410 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.410 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:37.410 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:37.410 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.410 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:37.410 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.410 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.411 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:37.411 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:37.411 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.411 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:37.411 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.411 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.411 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:37.411 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:37.411 EAL: Hugepages will be freed exactly as allocated. 00:05:37.411 EAL: No shared files mode enabled, IPC is disabled 00:05:37.411 EAL: No shared files mode enabled, IPC is disabled 00:05:37.411 EAL: TSC frequency is ~2400000 KHz 00:05:37.411 EAL: Main lcore 0 is ready (tid=7fd27ecd1a00;cpuset=[0]) 00:05:37.411 EAL: Trying to obtain current memory policy. 00:05:37.411 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.411 EAL: Restoring previous memory policy: 0 00:05:37.411 EAL: request: mp_malloc_sync 00:05:37.411 EAL: No shared files mode enabled, IPC is disabled 00:05:37.411 EAL: Heap on socket 0 was expanded by 2MB 00:05:37.411 EAL: No shared files mode enabled, IPC is disabled 00:05:37.411 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:37.411 EAL: Mem event callback 'spdk:(nil)' registered 00:05:37.411 00:05:37.411 00:05:37.411 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.411 http://cunit.sourceforge.net/ 00:05:37.411 00:05:37.411 00:05:37.411 Suite: components_suite 00:05:37.411 Test: vtophys_malloc_test ...passed 00:05:37.411 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:37.411 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.411 EAL: Restoring previous memory policy: 4 00:05:37.411 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.411 EAL: request: mp_malloc_sync 00:05:37.411 EAL: No shared files mode enabled, IPC is disabled 00:05:37.411 EAL: Heap on socket 0 was expanded by 4MB 00:05:37.411 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.411 EAL: request: mp_malloc_sync 00:05:37.411 EAL: No shared files mode enabled, IPC is disabled 00:05:37.411 EAL: Heap on socket 0 was shrunk by 4MB 00:05:37.411 EAL: Trying to obtain current memory policy. 00:05:37.411 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.411 EAL: Restoring previous memory policy: 4 00:05:37.411 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.411 EAL: request: mp_malloc_sync 00:05:37.411 EAL: No shared files mode enabled, IPC is disabled 00:05:37.411 EAL: Heap on socket 0 was expanded by 6MB 00:05:37.411 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.411 EAL: request: mp_malloc_sync 00:05:37.411 EAL: No shared files mode enabled, IPC is disabled 00:05:37.411 EAL: Heap on socket 0 was shrunk by 6MB 00:05:37.411 EAL: Trying to obtain current memory policy. 00:05:37.411 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.411 EAL: Restoring previous memory policy: 4 00:05:37.411 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.411 EAL: request: mp_malloc_sync 00:05:37.411 EAL: No shared files mode enabled, IPC is disabled 00:05:37.411 EAL: Heap on socket 0 was expanded by 10MB 00:05:37.411 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.411 EAL: request: mp_malloc_sync 00:05:37.411 EAL: No shared files mode enabled, IPC is disabled 00:05:37.411 EAL: Heap on socket 0 was shrunk by 10MB 00:05:37.411 EAL: Trying to obtain current memory policy. 
00:05:37.411 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.411 EAL: Restoring previous memory policy: 4 00:05:37.411 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.411 EAL: request: mp_malloc_sync 00:05:37.411 EAL: No shared files mode enabled, IPC is disabled 00:05:37.411 EAL: Heap on socket 0 was expanded by 18MB 00:05:37.411 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.411 EAL: request: mp_malloc_sync 00:05:37.411 EAL: No shared files mode enabled, IPC is disabled 00:05:37.411 EAL: Heap on socket 0 was shrunk by 18MB 00:05:37.411 EAL: Trying to obtain current memory policy. 00:05:37.411 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.411 EAL: Restoring previous memory policy: 4 00:05:37.411 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.411 EAL: request: mp_malloc_sync 00:05:37.411 EAL: No shared files mode enabled, IPC is disabled 00:05:37.411 EAL: Heap on socket 0 was expanded by 34MB 00:05:37.411 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.411 EAL: request: mp_malloc_sync 00:05:37.411 EAL: No shared files mode enabled, IPC is disabled 00:05:37.411 EAL: Heap on socket 0 was shrunk by 34MB 00:05:37.411 EAL: Trying to obtain current memory policy. 00:05:37.411 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.671 EAL: Restoring previous memory policy: 4 00:05:37.671 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.671 EAL: request: mp_malloc_sync 00:05:37.671 EAL: No shared files mode enabled, IPC is disabled 00:05:37.671 EAL: Heap on socket 0 was expanded by 66MB 00:05:37.671 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.671 EAL: request: mp_malloc_sync 00:05:37.671 EAL: No shared files mode enabled, IPC is disabled 00:05:37.671 EAL: Heap on socket 0 was shrunk by 66MB 00:05:37.671 EAL: Trying to obtain current memory policy. 00:05:37.671 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.671 EAL: Restoring previous memory policy: 4 00:05:37.671 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.671 EAL: request: mp_malloc_sync 00:05:37.671 EAL: No shared files mode enabled, IPC is disabled 00:05:37.671 EAL: Heap on socket 0 was expanded by 130MB 00:05:37.671 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.671 EAL: request: mp_malloc_sync 00:05:37.671 EAL: No shared files mode enabled, IPC is disabled 00:05:37.671 EAL: Heap on socket 0 was shrunk by 130MB 00:05:37.671 EAL: Trying to obtain current memory policy. 00:05:37.671 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.671 EAL: Restoring previous memory policy: 4 00:05:37.672 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.672 EAL: request: mp_malloc_sync 00:05:37.672 EAL: No shared files mode enabled, IPC is disabled 00:05:37.672 EAL: Heap on socket 0 was expanded by 258MB 00:05:37.672 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.672 EAL: request: mp_malloc_sync 00:05:37.672 EAL: No shared files mode enabled, IPC is disabled 00:05:37.672 EAL: Heap on socket 0 was shrunk by 258MB 00:05:37.672 EAL: Trying to obtain current memory policy. 
00:05:37.672 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.672 EAL: Restoring previous memory policy: 4 00:05:37.672 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.672 EAL: request: mp_malloc_sync 00:05:37.672 EAL: No shared files mode enabled, IPC is disabled 00:05:37.672 EAL: Heap on socket 0 was expanded by 514MB 00:05:37.672 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.931 EAL: request: mp_malloc_sync 00:05:37.931 EAL: No shared files mode enabled, IPC is disabled 00:05:37.931 EAL: Heap on socket 0 was shrunk by 514MB 00:05:37.931 EAL: Trying to obtain current memory policy. 00:05:37.931 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.931 EAL: Restoring previous memory policy: 4 00:05:37.931 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.931 EAL: request: mp_malloc_sync 00:05:37.931 EAL: No shared files mode enabled, IPC is disabled 00:05:37.931 EAL: Heap on socket 0 was expanded by 1026MB 00:05:38.191 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.191 EAL: request: mp_malloc_sync 00:05:38.191 EAL: No shared files mode enabled, IPC is disabled 00:05:38.191 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:38.191 passed 00:05:38.191 00:05:38.191 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.191 suites 1 1 n/a 0 0 00:05:38.191 tests 2 2 2 0 0 00:05:38.191 asserts 497 497 497 0 n/a 00:05:38.191 00:05:38.191 Elapsed time = 0.656 seconds 00:05:38.191 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.191 EAL: request: mp_malloc_sync 00:05:38.191 EAL: No shared files mode enabled, IPC is disabled 00:05:38.191 EAL: Heap on socket 0 was shrunk by 2MB 00:05:38.191 EAL: No shared files mode enabled, IPC is disabled 00:05:38.191 EAL: No shared files mode enabled, IPC is disabled 00:05:38.191 EAL: No shared files mode enabled, IPC is disabled 00:05:38.191 00:05:38.191 real 0m0.776s 00:05:38.191 user 0m0.411s 00:05:38.191 sys 0m0.342s 00:05:38.191 11:17:06 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.191 11:17:06 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:38.191 ************************************ 00:05:38.191 END TEST env_vtophys 00:05:38.191 ************************************ 00:05:38.191 11:17:06 env -- common/autotest_common.sh@1142 -- # return 0 00:05:38.191 11:17:06 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:38.191 11:17:06 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.191 11:17:06 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.191 11:17:06 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.191 ************************************ 00:05:38.191 START TEST env_pci 00:05:38.191 ************************************ 00:05:38.191 11:17:06 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:38.191 00:05:38.191 00:05:38.191 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.191 http://cunit.sourceforge.net/ 00:05:38.191 00:05:38.191 00:05:38.191 Suite: pci 00:05:38.191 Test: pci_hook ...[2024-07-15 11:17:06.856896] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3329449 has claimed it 00:05:38.191 EAL: Cannot find device (10000:00:01.0) 00:05:38.191 EAL: Failed to attach device on primary process 00:05:38.191 passed 00:05:38.191 
00:05:38.191 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.191 suites 1 1 n/a 0 0 00:05:38.191 tests 1 1 1 0 0 00:05:38.191 asserts 25 25 25 0 n/a 00:05:38.191 00:05:38.191 Elapsed time = 0.031 seconds 00:05:38.191 00:05:38.191 real 0m0.051s 00:05:38.191 user 0m0.014s 00:05:38.191 sys 0m0.036s 00:05:38.191 11:17:06 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.191 11:17:06 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:38.191 ************************************ 00:05:38.191 END TEST env_pci 00:05:38.191 ************************************ 00:05:38.451 11:17:06 env -- common/autotest_common.sh@1142 -- # return 0 00:05:38.451 11:17:06 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:38.451 11:17:06 env -- env/env.sh@15 -- # uname 00:05:38.451 11:17:06 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:38.451 11:17:06 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:38.451 11:17:06 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:38.451 11:17:06 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:38.451 11:17:06 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.451 11:17:06 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.451 ************************************ 00:05:38.451 START TEST env_dpdk_post_init 00:05:38.451 ************************************ 00:05:38.451 11:17:06 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:38.451 EAL: Detected CPU lcores: 128 00:05:38.451 EAL: Detected NUMA nodes: 2 00:05:38.451 EAL: Detected shared linkage of DPDK 00:05:38.451 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:38.451 EAL: Selected IOVA mode 'VA' 00:05:38.451 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.451 EAL: VFIO support initialized 00:05:38.451 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:38.451 EAL: Using IOMMU type 1 (Type 1) 00:05:38.711 EAL: Ignore mapping IO port bar(1) 00:05:38.711 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:38.971 EAL: Ignore mapping IO port bar(1) 00:05:38.971 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:38.971 EAL: Ignore mapping IO port bar(1) 00:05:39.232 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:39.232 EAL: Ignore mapping IO port bar(1) 00:05:39.492 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:39.492 EAL: Ignore mapping IO port bar(1) 00:05:39.752 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:39.752 EAL: Ignore mapping IO port bar(1) 00:05:39.752 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:40.013 EAL: Ignore mapping IO port bar(1) 00:05:40.013 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:40.273 EAL: Ignore mapping IO port bar(1) 00:05:40.273 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:40.532 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:40.532 EAL: Ignore mapping IO port bar(1) 00:05:40.791 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 
00:05:40.792 EAL: Ignore mapping IO port bar(1) 00:05:41.051 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:41.051 EAL: Ignore mapping IO port bar(1) 00:05:41.310 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:41.310 EAL: Ignore mapping IO port bar(1) 00:05:41.310 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:41.570 EAL: Ignore mapping IO port bar(1) 00:05:41.570 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:41.830 EAL: Ignore mapping IO port bar(1) 00:05:41.830 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:42.090 EAL: Ignore mapping IO port bar(1) 00:05:42.090 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:42.090 EAL: Ignore mapping IO port bar(1) 00:05:42.350 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:42.350 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:42.350 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:42.350 Starting DPDK initialization... 00:05:42.350 Starting SPDK post initialization... 00:05:42.350 SPDK NVMe probe 00:05:42.350 Attaching to 0000:65:00.0 00:05:42.350 Attached to 0000:65:00.0 00:05:42.350 Cleaning up... 00:05:44.256 00:05:44.256 real 0m5.711s 00:05:44.256 user 0m0.178s 00:05:44.256 sys 0m0.075s 00:05:44.256 11:17:12 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.256 11:17:12 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:44.256 ************************************ 00:05:44.256 END TEST env_dpdk_post_init 00:05:44.256 ************************************ 00:05:44.256 11:17:12 env -- common/autotest_common.sh@1142 -- # return 0 00:05:44.256 11:17:12 env -- env/env.sh@26 -- # uname 00:05:44.256 11:17:12 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:44.256 11:17:12 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:44.256 11:17:12 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.256 11:17:12 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.256 11:17:12 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.256 ************************************ 00:05:44.256 START TEST env_mem_callbacks 00:05:44.256 ************************************ 00:05:44.256 11:17:12 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:44.256 EAL: Detected CPU lcores: 128 00:05:44.256 EAL: Detected NUMA nodes: 2 00:05:44.256 EAL: Detected shared linkage of DPDK 00:05:44.256 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:44.256 EAL: Selected IOVA mode 'VA' 00:05:44.256 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.256 EAL: VFIO support initialized 00:05:44.256 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:44.256 00:05:44.256 00:05:44.256 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.256 http://cunit.sourceforge.net/ 00:05:44.256 00:05:44.256 00:05:44.256 Suite: memory 00:05:44.256 Test: test ... 
00:05:44.256 register 0x200000200000 2097152 00:05:44.256 malloc 3145728 00:05:44.256 register 0x200000400000 4194304 00:05:44.256 buf 0x200000500000 len 3145728 PASSED 00:05:44.256 malloc 64 00:05:44.256 buf 0x2000004fff40 len 64 PASSED 00:05:44.256 malloc 4194304 00:05:44.256 register 0x200000800000 6291456 00:05:44.256 buf 0x200000a00000 len 4194304 PASSED 00:05:44.256 free 0x200000500000 3145728 00:05:44.256 free 0x2000004fff40 64 00:05:44.256 unregister 0x200000400000 4194304 PASSED 00:05:44.256 free 0x200000a00000 4194304 00:05:44.256 unregister 0x200000800000 6291456 PASSED 00:05:44.256 malloc 8388608 00:05:44.256 register 0x200000400000 10485760 00:05:44.256 buf 0x200000600000 len 8388608 PASSED 00:05:44.256 free 0x200000600000 8388608 00:05:44.256 unregister 0x200000400000 10485760 PASSED 00:05:44.256 passed 00:05:44.257 00:05:44.257 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.257 suites 1 1 n/a 0 0 00:05:44.257 tests 1 1 1 0 0 00:05:44.257 asserts 15 15 15 0 n/a 00:05:44.257 00:05:44.257 Elapsed time = 0.008 seconds 00:05:44.257 00:05:44.257 real 0m0.065s 00:05:44.257 user 0m0.024s 00:05:44.257 sys 0m0.041s 00:05:44.257 11:17:12 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.257 11:17:12 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:44.257 ************************************ 00:05:44.257 END TEST env_mem_callbacks 00:05:44.257 ************************************ 00:05:44.257 11:17:12 env -- common/autotest_common.sh@1142 -- # return 0 00:05:44.257 00:05:44.257 real 0m7.313s 00:05:44.257 user 0m1.007s 00:05:44.257 sys 0m0.851s 00:05:44.257 11:17:12 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.257 11:17:12 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.257 ************************************ 00:05:44.257 END TEST env 00:05:44.257 ************************************ 00:05:44.257 11:17:12 -- common/autotest_common.sh@1142 -- # return 0 00:05:44.257 11:17:12 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:44.257 11:17:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.257 11:17:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.257 11:17:12 -- common/autotest_common.sh@10 -- # set +x 00:05:44.517 ************************************ 00:05:44.517 START TEST rpc 00:05:44.517 ************************************ 00:05:44.517 11:17:12 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:44.517 * Looking for test storage... 00:05:44.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:44.517 11:17:13 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3330901 00:05:44.517 11:17:13 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.517 11:17:13 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:44.517 11:17:13 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3330901 00:05:44.517 11:17:13 rpc -- common/autotest_common.sh@829 -- # '[' -z 3330901 ']' 00:05:44.517 11:17:13 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.517 11:17:13 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.517 11:17:13 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:44.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.517 11:17:13 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.517 11:17:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.517 [2024-07-15 11:17:13.123986] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:05:44.517 [2024-07-15 11:17:13.124040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3330901 ] 00:05:44.517 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.517 [2024-07-15 11:17:13.182943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.777 [2024-07-15 11:17:13.246781] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:44.777 [2024-07-15 11:17:13.246819] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3330901' to capture a snapshot of events at runtime. 00:05:44.777 [2024-07-15 11:17:13.246827] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:44.777 [2024-07-15 11:17:13.246833] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:44.777 [2024-07-15 11:17:13.246839] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3330901 for offline analysis/debug. 00:05:44.777 [2024-07-15 11:17:13.246860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.398 11:17:13 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.398 11:17:13 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:45.398 11:17:13 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:45.398 11:17:13 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:45.398 11:17:13 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:45.398 11:17:13 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:45.398 11:17:13 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.398 11:17:13 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.398 11:17:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.398 ************************************ 00:05:45.398 START TEST rpc_integrity 00:05:45.398 ************************************ 00:05:45.398 11:17:13 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:45.398 11:17:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:45.398 11:17:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.398 11:17:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.398 11:17:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.398 11:17:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:45.398 11:17:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:45.398 11:17:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:45.398 11:17:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:45.398 11:17:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.398 11:17:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.398 11:17:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.398 11:17:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:45.398 11:17:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:45.398 11:17:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.398 11:17:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.398 11:17:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.398 11:17:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:45.398 { 00:05:45.398 "name": "Malloc0", 00:05:45.398 "aliases": [ 00:05:45.398 "8711a9bf-97c8-4826-9df0-84e1ebcfa20b" 00:05:45.398 ], 00:05:45.398 "product_name": "Malloc disk", 00:05:45.398 "block_size": 512, 00:05:45.398 "num_blocks": 16384, 00:05:45.398 "uuid": "8711a9bf-97c8-4826-9df0-84e1ebcfa20b", 00:05:45.398 "assigned_rate_limits": { 00:05:45.398 "rw_ios_per_sec": 0, 00:05:45.398 "rw_mbytes_per_sec": 0, 00:05:45.398 "r_mbytes_per_sec": 0, 00:05:45.398 "w_mbytes_per_sec": 0 00:05:45.398 }, 00:05:45.398 "claimed": false, 00:05:45.398 "zoned": false, 00:05:45.398 "supported_io_types": { 00:05:45.398 "read": true, 00:05:45.398 "write": true, 00:05:45.398 "unmap": true, 00:05:45.398 "flush": true, 00:05:45.398 "reset": true, 00:05:45.398 "nvme_admin": false, 00:05:45.398 "nvme_io": false, 00:05:45.398 "nvme_io_md": false, 00:05:45.398 "write_zeroes": true, 00:05:45.398 "zcopy": true, 00:05:45.398 "get_zone_info": false, 00:05:45.398 "zone_management": false, 00:05:45.398 "zone_append": false, 00:05:45.398 "compare": false, 00:05:45.398 "compare_and_write": false, 00:05:45.398 "abort": true, 00:05:45.398 "seek_hole": false, 00:05:45.398 "seek_data": false, 00:05:45.398 "copy": true, 00:05:45.398 "nvme_iov_md": false 00:05:45.398 }, 00:05:45.398 "memory_domains": [ 00:05:45.398 { 00:05:45.398 "dma_device_id": "system", 00:05:45.398 "dma_device_type": 1 00:05:45.398 }, 00:05:45.398 { 00:05:45.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.398 "dma_device_type": 2 00:05:45.398 } 00:05:45.398 ], 00:05:45.398 "driver_specific": {} 00:05:45.398 } 00:05:45.398 ]' 00:05:45.398 11:17:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:45.398 11:17:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:45.398 11:17:14 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:45.398 11:17:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.398 11:17:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.398 [2024-07-15 11:17:14.063310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:45.398 [2024-07-15 11:17:14.063343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:45.398 [2024-07-15 11:17:14.063355] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1191d80 00:05:45.398 [2024-07-15 11:17:14.063362] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:45.398 
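The rpc_integrity case running here creates an 8 MiB malloc bdev with 512-byte blocks, layers a passthru bdev on top of it, verifies bdev_get_bdevs reports both, and then tears the stack down (the teardown follows below). A minimal sketch of the same RPC sequence against a running spdk_tgt, assuming the default /var/tmp/spdk.sock socket and that scripts/rpc.py is invoked from the SPDK source tree (the test itself goes through the rpc_cmd helper from autotest_common.sh):

  scripts/rpc.py bdev_malloc_create 8 512                     # 8 MiB, 512 B blocks -> Malloc0 here (first malloc bdev)
  scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  scripts/rpc.py bdev_get_bdevs | jq length                   # expect 2 (Malloc0 + Passthru0)
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc0
  scripts/rpc.py bdev_get_bdevs | jq length                   # expect 0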
[2024-07-15 11:17:14.064698] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:45.398 [2024-07-15 11:17:14.064718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:45.398 Passthru0 00:05:45.398 11:17:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.398 11:17:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:45.398 11:17:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.398 11:17:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.398 11:17:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.398 11:17:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:45.398 { 00:05:45.398 "name": "Malloc0", 00:05:45.398 "aliases": [ 00:05:45.398 "8711a9bf-97c8-4826-9df0-84e1ebcfa20b" 00:05:45.398 ], 00:05:45.398 "product_name": "Malloc disk", 00:05:45.398 "block_size": 512, 00:05:45.398 "num_blocks": 16384, 00:05:45.398 "uuid": "8711a9bf-97c8-4826-9df0-84e1ebcfa20b", 00:05:45.398 "assigned_rate_limits": { 00:05:45.398 "rw_ios_per_sec": 0, 00:05:45.398 "rw_mbytes_per_sec": 0, 00:05:45.398 "r_mbytes_per_sec": 0, 00:05:45.398 "w_mbytes_per_sec": 0 00:05:45.398 }, 00:05:45.398 "claimed": true, 00:05:45.398 "claim_type": "exclusive_write", 00:05:45.398 "zoned": false, 00:05:45.398 "supported_io_types": { 00:05:45.398 "read": true, 00:05:45.398 "write": true, 00:05:45.398 "unmap": true, 00:05:45.398 "flush": true, 00:05:45.398 "reset": true, 00:05:45.398 "nvme_admin": false, 00:05:45.398 "nvme_io": false, 00:05:45.398 "nvme_io_md": false, 00:05:45.398 "write_zeroes": true, 00:05:45.398 "zcopy": true, 00:05:45.398 "get_zone_info": false, 00:05:45.398 "zone_management": false, 00:05:45.398 "zone_append": false, 00:05:45.398 "compare": false, 00:05:45.398 "compare_and_write": false, 00:05:45.398 "abort": true, 00:05:45.398 "seek_hole": false, 00:05:45.398 "seek_data": false, 00:05:45.398 "copy": true, 00:05:45.398 "nvme_iov_md": false 00:05:45.398 }, 00:05:45.398 "memory_domains": [ 00:05:45.398 { 00:05:45.398 "dma_device_id": "system", 00:05:45.398 "dma_device_type": 1 00:05:45.398 }, 00:05:45.398 { 00:05:45.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.398 "dma_device_type": 2 00:05:45.398 } 00:05:45.398 ], 00:05:45.398 "driver_specific": {} 00:05:45.398 }, 00:05:45.398 { 00:05:45.398 "name": "Passthru0", 00:05:45.398 "aliases": [ 00:05:45.398 "0f409b0f-e9e9-593f-9fb0-b356a2266937" 00:05:45.398 ], 00:05:45.398 "product_name": "passthru", 00:05:45.398 "block_size": 512, 00:05:45.398 "num_blocks": 16384, 00:05:45.398 "uuid": "0f409b0f-e9e9-593f-9fb0-b356a2266937", 00:05:45.398 "assigned_rate_limits": { 00:05:45.398 "rw_ios_per_sec": 0, 00:05:45.398 "rw_mbytes_per_sec": 0, 00:05:45.398 "r_mbytes_per_sec": 0, 00:05:45.398 "w_mbytes_per_sec": 0 00:05:45.398 }, 00:05:45.399 "claimed": false, 00:05:45.399 "zoned": false, 00:05:45.399 "supported_io_types": { 00:05:45.399 "read": true, 00:05:45.399 "write": true, 00:05:45.399 "unmap": true, 00:05:45.399 "flush": true, 00:05:45.399 "reset": true, 00:05:45.399 "nvme_admin": false, 00:05:45.399 "nvme_io": false, 00:05:45.399 "nvme_io_md": false, 00:05:45.399 "write_zeroes": true, 00:05:45.399 "zcopy": true, 00:05:45.399 "get_zone_info": false, 00:05:45.399 "zone_management": false, 00:05:45.399 "zone_append": false, 00:05:45.399 "compare": false, 00:05:45.399 "compare_and_write": false, 00:05:45.399 "abort": true, 00:05:45.399 "seek_hole": false, 
00:05:45.399 "seek_data": false, 00:05:45.399 "copy": true, 00:05:45.399 "nvme_iov_md": false 00:05:45.399 }, 00:05:45.399 "memory_domains": [ 00:05:45.399 { 00:05:45.399 "dma_device_id": "system", 00:05:45.399 "dma_device_type": 1 00:05:45.399 }, 00:05:45.399 { 00:05:45.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.399 "dma_device_type": 2 00:05:45.399 } 00:05:45.399 ], 00:05:45.399 "driver_specific": { 00:05:45.399 "passthru": { 00:05:45.399 "name": "Passthru0", 00:05:45.399 "base_bdev_name": "Malloc0" 00:05:45.399 } 00:05:45.399 } 00:05:45.399 } 00:05:45.399 ]' 00:05:45.399 11:17:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:45.659 11:17:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:45.659 11:17:14 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:45.659 11:17:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.659 11:17:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.659 11:17:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.659 11:17:14 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:45.659 11:17:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.659 11:17:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.660 11:17:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.660 11:17:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:45.660 11:17:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.660 11:17:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.660 11:17:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.660 11:17:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:45.660 11:17:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:45.660 11:17:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:45.660 00:05:45.660 real 0m0.297s 00:05:45.660 user 0m0.194s 00:05:45.660 sys 0m0.039s 00:05:45.660 11:17:14 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.660 11:17:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.660 ************************************ 00:05:45.660 END TEST rpc_integrity 00:05:45.660 ************************************ 00:05:45.660 11:17:14 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:45.660 11:17:14 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:45.660 11:17:14 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.660 11:17:14 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.660 11:17:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.660 ************************************ 00:05:45.660 START TEST rpc_plugins 00:05:45.660 ************************************ 00:05:45.660 11:17:14 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:45.660 11:17:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:45.660 11:17:14 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.660 11:17:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.660 11:17:14 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.660 11:17:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:45.660 11:17:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:45.660 11:17:14 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.660 11:17:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.660 11:17:14 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.660 11:17:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:45.660 { 00:05:45.660 "name": "Malloc1", 00:05:45.660 "aliases": [ 00:05:45.660 "4c487640-ce64-492a-a121-97bc3507f532" 00:05:45.660 ], 00:05:45.660 "product_name": "Malloc disk", 00:05:45.660 "block_size": 4096, 00:05:45.660 "num_blocks": 256, 00:05:45.660 "uuid": "4c487640-ce64-492a-a121-97bc3507f532", 00:05:45.660 "assigned_rate_limits": { 00:05:45.660 "rw_ios_per_sec": 0, 00:05:45.660 "rw_mbytes_per_sec": 0, 00:05:45.660 "r_mbytes_per_sec": 0, 00:05:45.660 "w_mbytes_per_sec": 0 00:05:45.660 }, 00:05:45.660 "claimed": false, 00:05:45.660 "zoned": false, 00:05:45.660 "supported_io_types": { 00:05:45.660 "read": true, 00:05:45.660 "write": true, 00:05:45.660 "unmap": true, 00:05:45.660 "flush": true, 00:05:45.660 "reset": true, 00:05:45.660 "nvme_admin": false, 00:05:45.660 "nvme_io": false, 00:05:45.660 "nvme_io_md": false, 00:05:45.660 "write_zeroes": true, 00:05:45.660 "zcopy": true, 00:05:45.660 "get_zone_info": false, 00:05:45.660 "zone_management": false, 00:05:45.660 "zone_append": false, 00:05:45.660 "compare": false, 00:05:45.660 "compare_and_write": false, 00:05:45.660 "abort": true, 00:05:45.660 "seek_hole": false, 00:05:45.660 "seek_data": false, 00:05:45.660 "copy": true, 00:05:45.660 "nvme_iov_md": false 00:05:45.660 }, 00:05:45.660 "memory_domains": [ 00:05:45.660 { 00:05:45.660 "dma_device_id": "system", 00:05:45.660 "dma_device_type": 1 00:05:45.660 }, 00:05:45.660 { 00:05:45.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.660 "dma_device_type": 2 00:05:45.660 } 00:05:45.660 ], 00:05:45.660 "driver_specific": {} 00:05:45.660 } 00:05:45.660 ]' 00:05:45.660 11:17:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:45.920 11:17:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:45.920 11:17:14 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:45.920 11:17:14 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.920 11:17:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.920 11:17:14 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.920 11:17:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:45.920 11:17:14 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.920 11:17:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.920 11:17:14 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.920 11:17:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:45.920 11:17:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:45.920 11:17:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:45.920 00:05:45.920 real 0m0.149s 00:05:45.920 user 0m0.092s 00:05:45.920 sys 0m0.021s 00:05:45.920 11:17:14 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.920 11:17:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.920 ************************************ 00:05:45.920 END TEST rpc_plugins 00:05:45.920 ************************************ 00:05:45.920 11:17:14 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:45.920 11:17:14 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:45.920 11:17:14 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.920 11:17:14 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.920 11:17:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.920 ************************************ 00:05:45.920 START TEST rpc_trace_cmd_test 00:05:45.920 ************************************ 00:05:45.920 11:17:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:45.920 11:17:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:45.920 11:17:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:45.920 11:17:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.920 11:17:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:45.920 11:17:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.920 11:17:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:45.920 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3330901", 00:05:45.920 "tpoint_group_mask": "0x8", 00:05:45.920 "iscsi_conn": { 00:05:45.920 "mask": "0x2", 00:05:45.920 "tpoint_mask": "0x0" 00:05:45.920 }, 00:05:45.920 "scsi": { 00:05:45.920 "mask": "0x4", 00:05:45.920 "tpoint_mask": "0x0" 00:05:45.920 }, 00:05:45.920 "bdev": { 00:05:45.920 "mask": "0x8", 00:05:45.920 "tpoint_mask": "0xffffffffffffffff" 00:05:45.920 }, 00:05:45.920 "nvmf_rdma": { 00:05:45.920 "mask": "0x10", 00:05:45.920 "tpoint_mask": "0x0" 00:05:45.920 }, 00:05:45.920 "nvmf_tcp": { 00:05:45.920 "mask": "0x20", 00:05:45.920 "tpoint_mask": "0x0" 00:05:45.920 }, 00:05:45.920 "ftl": { 00:05:45.920 "mask": "0x40", 00:05:45.920 "tpoint_mask": "0x0" 00:05:45.920 }, 00:05:45.920 "blobfs": { 00:05:45.920 "mask": "0x80", 00:05:45.920 "tpoint_mask": "0x0" 00:05:45.920 }, 00:05:45.920 "dsa": { 00:05:45.920 "mask": "0x200", 00:05:45.920 "tpoint_mask": "0x0" 00:05:45.920 }, 00:05:45.920 "thread": { 00:05:45.920 "mask": "0x400", 00:05:45.920 "tpoint_mask": "0x0" 00:05:45.920 }, 00:05:45.920 "nvme_pcie": { 00:05:45.920 "mask": "0x800", 00:05:45.920 "tpoint_mask": "0x0" 00:05:45.920 }, 00:05:45.920 "iaa": { 00:05:45.920 "mask": "0x1000", 00:05:45.920 "tpoint_mask": "0x0" 00:05:45.920 }, 00:05:45.920 "nvme_tcp": { 00:05:45.920 "mask": "0x2000", 00:05:45.920 "tpoint_mask": "0x0" 00:05:45.920 }, 00:05:45.920 "bdev_nvme": { 00:05:45.920 "mask": "0x4000", 00:05:45.920 "tpoint_mask": "0x0" 00:05:45.920 }, 00:05:45.920 "sock": { 00:05:45.920 "mask": "0x8000", 00:05:45.920 "tpoint_mask": "0x0" 00:05:45.920 } 00:05:45.920 }' 00:05:45.920 11:17:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:45.920 11:17:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:45.920 11:17:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:46.180 11:17:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:46.180 11:17:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:46.180 11:17:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:46.180 11:17:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:46.180 11:17:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:46.180 11:17:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:46.180 11:17:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
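Because spdk_tgt was launched with '-e bdev', trace_get_info above reports tpoint_group_mask 0x8 with the bdev group fully enabled (0xffffffffffffffff) and points at the shared-memory trace file /dev/shm/spdk_tgt_trace.pid3330901. A rough equivalent of the jq checks the test performs, assuming the same running target; the spdk_trace invocation is the one the app's own startup notice suggested earlier:

  scripts/rpc.py trace_get_info | jq -r '.tpoint_group_mask'   # expect "0x8"
  scripts/rpc.py trace_get_info | jq -r '.bdev.tpoint_mask'    # expect "0xffffffffffffffff"
  # to capture a snapshot of the enabled tracepoints, per the startup notice:
  # spdk_trace -s spdk_tgt -p 3330901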
00:05:46.180 00:05:46.180 real 0m0.244s 00:05:46.180 user 0m0.204s 00:05:46.180 sys 0m0.031s 00:05:46.180 11:17:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.180 11:17:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:46.180 ************************************ 00:05:46.180 END TEST rpc_trace_cmd_test 00:05:46.180 ************************************ 00:05:46.180 11:17:14 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:46.180 11:17:14 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:46.180 11:17:14 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:46.180 11:17:14 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:46.180 11:17:14 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.180 11:17:14 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.180 11:17:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.180 ************************************ 00:05:46.180 START TEST rpc_daemon_integrity 00:05:46.180 ************************************ 00:05:46.180 11:17:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:46.180 11:17:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:46.180 11:17:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.180 11:17:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.180 11:17:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.180 11:17:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:46.180 11:17:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:46.440 11:17:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:46.440 11:17:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:46.440 11:17:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.440 11:17:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.440 11:17:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.440 11:17:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:46.440 11:17:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:46.440 11:17:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.440 11:17:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.440 11:17:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.440 11:17:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:46.440 { 00:05:46.440 "name": "Malloc2", 00:05:46.440 "aliases": [ 00:05:46.440 "2dfecced-97dd-477e-a3fb-f065b92f93a1" 00:05:46.440 ], 00:05:46.440 "product_name": "Malloc disk", 00:05:46.440 "block_size": 512, 00:05:46.440 "num_blocks": 16384, 00:05:46.440 "uuid": "2dfecced-97dd-477e-a3fb-f065b92f93a1", 00:05:46.440 "assigned_rate_limits": { 00:05:46.440 "rw_ios_per_sec": 0, 00:05:46.440 "rw_mbytes_per_sec": 0, 00:05:46.440 "r_mbytes_per_sec": 0, 00:05:46.440 "w_mbytes_per_sec": 0 00:05:46.440 }, 00:05:46.440 "claimed": false, 00:05:46.440 "zoned": false, 00:05:46.440 "supported_io_types": { 00:05:46.440 "read": true, 00:05:46.440 "write": true, 00:05:46.440 "unmap": true, 00:05:46.440 "flush": true, 00:05:46.440 "reset": true, 00:05:46.440 "nvme_admin": false, 00:05:46.440 "nvme_io": false, 
00:05:46.440 "nvme_io_md": false, 00:05:46.440 "write_zeroes": true, 00:05:46.440 "zcopy": true, 00:05:46.440 "get_zone_info": false, 00:05:46.440 "zone_management": false, 00:05:46.440 "zone_append": false, 00:05:46.440 "compare": false, 00:05:46.440 "compare_and_write": false, 00:05:46.440 "abort": true, 00:05:46.440 "seek_hole": false, 00:05:46.440 "seek_data": false, 00:05:46.440 "copy": true, 00:05:46.440 "nvme_iov_md": false 00:05:46.441 }, 00:05:46.441 "memory_domains": [ 00:05:46.441 { 00:05:46.441 "dma_device_id": "system", 00:05:46.441 "dma_device_type": 1 00:05:46.441 }, 00:05:46.441 { 00:05:46.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.441 "dma_device_type": 2 00:05:46.441 } 00:05:46.441 ], 00:05:46.441 "driver_specific": {} 00:05:46.441 } 00:05:46.441 ]' 00:05:46.441 11:17:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:46.441 11:17:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:46.441 11:17:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:46.441 11:17:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.441 11:17:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.441 [2024-07-15 11:17:14.973788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:46.441 [2024-07-15 11:17:14.973815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:46.441 [2024-07-15 11:17:14.973826] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1192a90 00:05:46.441 [2024-07-15 11:17:14.973833] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:46.441 [2024-07-15 11:17:14.975043] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:46.441 [2024-07-15 11:17:14.975062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:46.441 Passthru0 00:05:46.441 11:17:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.441 11:17:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:46.441 11:17:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.441 11:17:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.441 11:17:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.441 11:17:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:46.441 { 00:05:46.441 "name": "Malloc2", 00:05:46.441 "aliases": [ 00:05:46.441 "2dfecced-97dd-477e-a3fb-f065b92f93a1" 00:05:46.441 ], 00:05:46.441 "product_name": "Malloc disk", 00:05:46.441 "block_size": 512, 00:05:46.441 "num_blocks": 16384, 00:05:46.441 "uuid": "2dfecced-97dd-477e-a3fb-f065b92f93a1", 00:05:46.441 "assigned_rate_limits": { 00:05:46.441 "rw_ios_per_sec": 0, 00:05:46.441 "rw_mbytes_per_sec": 0, 00:05:46.441 "r_mbytes_per_sec": 0, 00:05:46.441 "w_mbytes_per_sec": 0 00:05:46.441 }, 00:05:46.441 "claimed": true, 00:05:46.441 "claim_type": "exclusive_write", 00:05:46.441 "zoned": false, 00:05:46.441 "supported_io_types": { 00:05:46.441 "read": true, 00:05:46.441 "write": true, 00:05:46.441 "unmap": true, 00:05:46.441 "flush": true, 00:05:46.441 "reset": true, 00:05:46.441 "nvme_admin": false, 00:05:46.441 "nvme_io": false, 00:05:46.441 "nvme_io_md": false, 00:05:46.441 "write_zeroes": true, 00:05:46.441 "zcopy": true, 00:05:46.441 "get_zone_info": 
false, 00:05:46.441 "zone_management": false, 00:05:46.441 "zone_append": false, 00:05:46.441 "compare": false, 00:05:46.441 "compare_and_write": false, 00:05:46.441 "abort": true, 00:05:46.441 "seek_hole": false, 00:05:46.441 "seek_data": false, 00:05:46.441 "copy": true, 00:05:46.441 "nvme_iov_md": false 00:05:46.441 }, 00:05:46.441 "memory_domains": [ 00:05:46.441 { 00:05:46.441 "dma_device_id": "system", 00:05:46.441 "dma_device_type": 1 00:05:46.441 }, 00:05:46.441 { 00:05:46.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.441 "dma_device_type": 2 00:05:46.441 } 00:05:46.441 ], 00:05:46.441 "driver_specific": {} 00:05:46.441 }, 00:05:46.441 { 00:05:46.441 "name": "Passthru0", 00:05:46.441 "aliases": [ 00:05:46.441 "c4ed72c1-ee89-54f9-b7ab-8ebbd4b5eca4" 00:05:46.441 ], 00:05:46.441 "product_name": "passthru", 00:05:46.441 "block_size": 512, 00:05:46.441 "num_blocks": 16384, 00:05:46.441 "uuid": "c4ed72c1-ee89-54f9-b7ab-8ebbd4b5eca4", 00:05:46.441 "assigned_rate_limits": { 00:05:46.441 "rw_ios_per_sec": 0, 00:05:46.441 "rw_mbytes_per_sec": 0, 00:05:46.441 "r_mbytes_per_sec": 0, 00:05:46.441 "w_mbytes_per_sec": 0 00:05:46.441 }, 00:05:46.441 "claimed": false, 00:05:46.441 "zoned": false, 00:05:46.441 "supported_io_types": { 00:05:46.441 "read": true, 00:05:46.441 "write": true, 00:05:46.441 "unmap": true, 00:05:46.441 "flush": true, 00:05:46.441 "reset": true, 00:05:46.441 "nvme_admin": false, 00:05:46.441 "nvme_io": false, 00:05:46.441 "nvme_io_md": false, 00:05:46.441 "write_zeroes": true, 00:05:46.441 "zcopy": true, 00:05:46.441 "get_zone_info": false, 00:05:46.441 "zone_management": false, 00:05:46.441 "zone_append": false, 00:05:46.441 "compare": false, 00:05:46.441 "compare_and_write": false, 00:05:46.441 "abort": true, 00:05:46.441 "seek_hole": false, 00:05:46.441 "seek_data": false, 00:05:46.441 "copy": true, 00:05:46.441 "nvme_iov_md": false 00:05:46.441 }, 00:05:46.441 "memory_domains": [ 00:05:46.441 { 00:05:46.441 "dma_device_id": "system", 00:05:46.441 "dma_device_type": 1 00:05:46.441 }, 00:05:46.441 { 00:05:46.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.441 "dma_device_type": 2 00:05:46.441 } 00:05:46.441 ], 00:05:46.441 "driver_specific": { 00:05:46.441 "passthru": { 00:05:46.441 "name": "Passthru0", 00:05:46.441 "base_bdev_name": "Malloc2" 00:05:46.441 } 00:05:46.441 } 00:05:46.441 } 00:05:46.441 ]' 00:05:46.441 11:17:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:46.441 11:17:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:46.441 11:17:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:46.441 11:17:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.441 11:17:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.441 11:17:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.441 11:17:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:46.441 11:17:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.441 11:17:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.441 11:17:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.441 11:17:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:46.441 11:17:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.441 11:17:15 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.441 11:17:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.441 11:17:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:46.441 11:17:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:46.441 11:17:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:46.441 00:05:46.441 real 0m0.286s 00:05:46.441 user 0m0.181s 00:05:46.441 sys 0m0.047s 00:05:46.441 11:17:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.441 11:17:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.441 ************************************ 00:05:46.441 END TEST rpc_daemon_integrity 00:05:46.441 ************************************ 00:05:46.701 11:17:15 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:46.701 11:17:15 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:46.701 11:17:15 rpc -- rpc/rpc.sh@84 -- # killprocess 3330901 00:05:46.701 11:17:15 rpc -- common/autotest_common.sh@948 -- # '[' -z 3330901 ']' 00:05:46.701 11:17:15 rpc -- common/autotest_common.sh@952 -- # kill -0 3330901 00:05:46.701 11:17:15 rpc -- common/autotest_common.sh@953 -- # uname 00:05:46.701 11:17:15 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.701 11:17:15 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3330901 00:05:46.701 11:17:15 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:46.701 11:17:15 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.701 11:17:15 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3330901' 00:05:46.701 killing process with pid 3330901 00:05:46.701 11:17:15 rpc -- common/autotest_common.sh@967 -- # kill 3330901 00:05:46.701 11:17:15 rpc -- common/autotest_common.sh@972 -- # wait 3330901 00:05:46.961 00:05:46.961 real 0m2.459s 00:05:46.961 user 0m3.235s 00:05:46.961 sys 0m0.696s 00:05:46.961 11:17:15 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.961 11:17:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.961 ************************************ 00:05:46.961 END TEST rpc 00:05:46.961 ************************************ 00:05:46.961 11:17:15 -- common/autotest_common.sh@1142 -- # return 0 00:05:46.961 11:17:15 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:46.961 11:17:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.961 11:17:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.961 11:17:15 -- common/autotest_common.sh@10 -- # set +x 00:05:46.961 ************************************ 00:05:46.961 START TEST skip_rpc 00:05:46.961 ************************************ 00:05:46.961 11:17:15 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:46.961 * Looking for test storage... 
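The skip_rpc case that follows starts spdk_tgt with --no-rpc-server and asserts that an RPC such as spdk_get_version cannot succeed, since no listener is created on /var/tmp/spdk.sock. A minimal sketch of that check, run from the SPDK source tree (the real test wraps this in the rpc_cmd/NOT helpers and killprocess from autotest_common.sh):

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5                                            # give the target time to start
  if scripts/rpc.py spdk_get_version; then
      echo "unexpected: RPC answered with --no-rpc-server"; exit 1
  fi
  kill -9 "$tgt_pid"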
00:05:46.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:46.961 11:17:15 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:46.961 11:17:15 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:46.961 11:17:15 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:46.961 11:17:15 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.961 11:17:15 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.961 11:17:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.961 ************************************ 00:05:46.961 START TEST skip_rpc 00:05:46.961 ************************************ 00:05:46.961 11:17:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:46.961 11:17:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3331423 00:05:46.961 11:17:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.961 11:17:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:46.961 11:17:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:47.220 [2024-07-15 11:17:15.683710] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:05:47.220 [2024-07-15 11:17:15.683778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3331423 ] 00:05:47.220 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.220 [2024-07-15 11:17:15.747451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.220 [2024-07-15 11:17:15.823348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.500 11:17:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:52.500 11:17:20 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:52.500 11:17:20 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:52.500 11:17:20 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:52.500 11:17:20 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.500 11:17:20 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:52.500 11:17:20 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.500 11:17:20 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:52.500 11:17:20 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.500 11:17:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.500 11:17:20 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:52.500 11:17:20 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:52.500 11:17:20 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:52.500 11:17:20 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:52.500 11:17:20 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:52.500 11:17:20 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:52.500 11:17:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3331423 00:05:52.500 11:17:20 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 3331423 ']' 00:05:52.500 11:17:20 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 3331423 00:05:52.500 11:17:20 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:52.500 11:17:20 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:52.500 11:17:20 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3331423 00:05:52.500 11:17:20 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:52.500 11:17:20 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:52.500 11:17:20 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3331423' 00:05:52.500 killing process with pid 3331423 00:05:52.500 11:17:20 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 3331423 00:05:52.500 11:17:20 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 3331423 00:05:52.500 00:05:52.500 real 0m5.274s 00:05:52.500 user 0m5.077s 00:05:52.500 sys 0m0.232s 00:05:52.500 11:17:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.500 11:17:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.500 ************************************ 00:05:52.500 END TEST skip_rpc 00:05:52.500 ************************************ 00:05:52.500 11:17:20 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:52.500 11:17:20 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:52.500 11:17:20 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.500 11:17:20 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.500 11:17:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.500 ************************************ 00:05:52.500 START TEST skip_rpc_with_json 00:05:52.500 ************************************ 00:05:52.500 11:17:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:52.500 11:17:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:52.500 11:17:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3332568 00:05:52.500 11:17:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.500 11:17:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3332568 00:05:52.500 11:17:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.500 11:17:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 3332568 ']' 00:05:52.500 11:17:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.500 11:17:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.500 11:17:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
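The skip_rpc_with_json case below starts spdk_tgt with the RPC server enabled, confirms nvmf_get_transports fails while no transport exists, creates the TCP transport, and saves the resulting configuration to test/rpc/config.json (the JSON dump that follows). Roughly the same RPC sequence by hand, assuming a running target on the default socket:

  scripts/rpc.py nvmf_get_transports --trtype tcp    # fails: transport 'tcp' does not exist yet
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py save_config > test/rpc/config.json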
00:05:52.500 11:17:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.500 11:17:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:52.500 [2024-07-15 11:17:21.029562] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:05:52.500 [2024-07-15 11:17:21.029617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3332568 ] 00:05:52.500 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.500 [2024-07-15 11:17:21.091422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.500 [2024-07-15 11:17:21.159925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.440 11:17:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.440 11:17:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:53.440 11:17:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:53.440 11:17:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.440 11:17:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.440 [2024-07-15 11:17:21.802830] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:53.440 request: 00:05:53.440 { 00:05:53.440 "trtype": "tcp", 00:05:53.440 "method": "nvmf_get_transports", 00:05:53.440 "req_id": 1 00:05:53.440 } 00:05:53.440 Got JSON-RPC error response 00:05:53.440 response: 00:05:53.440 { 00:05:53.440 "code": -19, 00:05:53.440 "message": "No such device" 00:05:53.440 } 00:05:53.440 11:17:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:53.440 11:17:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:53.440 11:17:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.440 11:17:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.440 [2024-07-15 11:17:21.814952] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:53.440 11:17:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.440 11:17:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:53.440 11:17:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.440 11:17:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.440 11:17:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.440 11:17:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:53.440 { 00:05:53.440 "subsystems": [ 00:05:53.440 { 00:05:53.440 "subsystem": "vfio_user_target", 00:05:53.440 "config": null 00:05:53.440 }, 00:05:53.440 { 00:05:53.440 "subsystem": "keyring", 00:05:53.440 "config": [] 00:05:53.440 }, 00:05:53.440 { 00:05:53.440 "subsystem": "iobuf", 00:05:53.440 "config": [ 00:05:53.440 { 00:05:53.440 "method": "iobuf_set_options", 00:05:53.440 "params": { 00:05:53.440 "small_pool_count": 8192, 00:05:53.440 "large_pool_count": 1024, 00:05:53.440 "small_bufsize": 8192, 00:05:53.440 "large_bufsize": 
135168 00:05:53.440 } 00:05:53.440 } 00:05:53.440 ] 00:05:53.440 }, 00:05:53.440 { 00:05:53.440 "subsystem": "sock", 00:05:53.440 "config": [ 00:05:53.440 { 00:05:53.440 "method": "sock_set_default_impl", 00:05:53.440 "params": { 00:05:53.440 "impl_name": "posix" 00:05:53.440 } 00:05:53.440 }, 00:05:53.440 { 00:05:53.440 "method": "sock_impl_set_options", 00:05:53.440 "params": { 00:05:53.440 "impl_name": "ssl", 00:05:53.440 "recv_buf_size": 4096, 00:05:53.440 "send_buf_size": 4096, 00:05:53.440 "enable_recv_pipe": true, 00:05:53.440 "enable_quickack": false, 00:05:53.440 "enable_placement_id": 0, 00:05:53.440 "enable_zerocopy_send_server": true, 00:05:53.440 "enable_zerocopy_send_client": false, 00:05:53.440 "zerocopy_threshold": 0, 00:05:53.440 "tls_version": 0, 00:05:53.440 "enable_ktls": false 00:05:53.440 } 00:05:53.440 }, 00:05:53.440 { 00:05:53.440 "method": "sock_impl_set_options", 00:05:53.440 "params": { 00:05:53.440 "impl_name": "posix", 00:05:53.440 "recv_buf_size": 2097152, 00:05:53.440 "send_buf_size": 2097152, 00:05:53.440 "enable_recv_pipe": true, 00:05:53.440 "enable_quickack": false, 00:05:53.440 "enable_placement_id": 0, 00:05:53.440 "enable_zerocopy_send_server": true, 00:05:53.440 "enable_zerocopy_send_client": false, 00:05:53.440 "zerocopy_threshold": 0, 00:05:53.440 "tls_version": 0, 00:05:53.440 "enable_ktls": false 00:05:53.440 } 00:05:53.440 } 00:05:53.440 ] 00:05:53.440 }, 00:05:53.440 { 00:05:53.440 "subsystem": "vmd", 00:05:53.440 "config": [] 00:05:53.440 }, 00:05:53.440 { 00:05:53.440 "subsystem": "accel", 00:05:53.440 "config": [ 00:05:53.440 { 00:05:53.440 "method": "accel_set_options", 00:05:53.440 "params": { 00:05:53.440 "small_cache_size": 128, 00:05:53.440 "large_cache_size": 16, 00:05:53.440 "task_count": 2048, 00:05:53.440 "sequence_count": 2048, 00:05:53.440 "buf_count": 2048 00:05:53.440 } 00:05:53.440 } 00:05:53.440 ] 00:05:53.440 }, 00:05:53.440 { 00:05:53.440 "subsystem": "bdev", 00:05:53.440 "config": [ 00:05:53.440 { 00:05:53.440 "method": "bdev_set_options", 00:05:53.440 "params": { 00:05:53.440 "bdev_io_pool_size": 65535, 00:05:53.440 "bdev_io_cache_size": 256, 00:05:53.440 "bdev_auto_examine": true, 00:05:53.440 "iobuf_small_cache_size": 128, 00:05:53.440 "iobuf_large_cache_size": 16 00:05:53.440 } 00:05:53.440 }, 00:05:53.440 { 00:05:53.440 "method": "bdev_raid_set_options", 00:05:53.440 "params": { 00:05:53.440 "process_window_size_kb": 1024 00:05:53.440 } 00:05:53.440 }, 00:05:53.440 { 00:05:53.440 "method": "bdev_iscsi_set_options", 00:05:53.440 "params": { 00:05:53.440 "timeout_sec": 30 00:05:53.440 } 00:05:53.440 }, 00:05:53.440 { 00:05:53.440 "method": "bdev_nvme_set_options", 00:05:53.440 "params": { 00:05:53.440 "action_on_timeout": "none", 00:05:53.440 "timeout_us": 0, 00:05:53.440 "timeout_admin_us": 0, 00:05:53.440 "keep_alive_timeout_ms": 10000, 00:05:53.440 "arbitration_burst": 0, 00:05:53.440 "low_priority_weight": 0, 00:05:53.440 "medium_priority_weight": 0, 00:05:53.440 "high_priority_weight": 0, 00:05:53.440 "nvme_adminq_poll_period_us": 10000, 00:05:53.440 "nvme_ioq_poll_period_us": 0, 00:05:53.440 "io_queue_requests": 0, 00:05:53.440 "delay_cmd_submit": true, 00:05:53.440 "transport_retry_count": 4, 00:05:53.440 "bdev_retry_count": 3, 00:05:53.440 "transport_ack_timeout": 0, 00:05:53.440 "ctrlr_loss_timeout_sec": 0, 00:05:53.440 "reconnect_delay_sec": 0, 00:05:53.440 "fast_io_fail_timeout_sec": 0, 00:05:53.440 "disable_auto_failback": false, 00:05:53.440 "generate_uuids": false, 00:05:53.440 "transport_tos": 0, 
00:05:53.440 "nvme_error_stat": false, 00:05:53.440 "rdma_srq_size": 0, 00:05:53.440 "io_path_stat": false, 00:05:53.440 "allow_accel_sequence": false, 00:05:53.440 "rdma_max_cq_size": 0, 00:05:53.440 "rdma_cm_event_timeout_ms": 0, 00:05:53.440 "dhchap_digests": [ 00:05:53.440 "sha256", 00:05:53.440 "sha384", 00:05:53.440 "sha512" 00:05:53.440 ], 00:05:53.440 "dhchap_dhgroups": [ 00:05:53.440 "null", 00:05:53.440 "ffdhe2048", 00:05:53.440 "ffdhe3072", 00:05:53.440 "ffdhe4096", 00:05:53.440 "ffdhe6144", 00:05:53.440 "ffdhe8192" 00:05:53.440 ] 00:05:53.440 } 00:05:53.440 }, 00:05:53.440 { 00:05:53.440 "method": "bdev_nvme_set_hotplug", 00:05:53.440 "params": { 00:05:53.441 "period_us": 100000, 00:05:53.441 "enable": false 00:05:53.441 } 00:05:53.441 }, 00:05:53.441 { 00:05:53.441 "method": "bdev_wait_for_examine" 00:05:53.441 } 00:05:53.441 ] 00:05:53.441 }, 00:05:53.441 { 00:05:53.441 "subsystem": "scsi", 00:05:53.441 "config": null 00:05:53.441 }, 00:05:53.441 { 00:05:53.441 "subsystem": "scheduler", 00:05:53.441 "config": [ 00:05:53.441 { 00:05:53.441 "method": "framework_set_scheduler", 00:05:53.441 "params": { 00:05:53.441 "name": "static" 00:05:53.441 } 00:05:53.441 } 00:05:53.441 ] 00:05:53.441 }, 00:05:53.441 { 00:05:53.441 "subsystem": "vhost_scsi", 00:05:53.441 "config": [] 00:05:53.441 }, 00:05:53.441 { 00:05:53.441 "subsystem": "vhost_blk", 00:05:53.441 "config": [] 00:05:53.441 }, 00:05:53.441 { 00:05:53.441 "subsystem": "ublk", 00:05:53.441 "config": [] 00:05:53.441 }, 00:05:53.441 { 00:05:53.441 "subsystem": "nbd", 00:05:53.441 "config": [] 00:05:53.441 }, 00:05:53.441 { 00:05:53.441 "subsystem": "nvmf", 00:05:53.441 "config": [ 00:05:53.441 { 00:05:53.441 "method": "nvmf_set_config", 00:05:53.441 "params": { 00:05:53.441 "discovery_filter": "match_any", 00:05:53.441 "admin_cmd_passthru": { 00:05:53.441 "identify_ctrlr": false 00:05:53.441 } 00:05:53.441 } 00:05:53.441 }, 00:05:53.441 { 00:05:53.441 "method": "nvmf_set_max_subsystems", 00:05:53.441 "params": { 00:05:53.441 "max_subsystems": 1024 00:05:53.441 } 00:05:53.441 }, 00:05:53.441 { 00:05:53.441 "method": "nvmf_set_crdt", 00:05:53.441 "params": { 00:05:53.441 "crdt1": 0, 00:05:53.441 "crdt2": 0, 00:05:53.441 "crdt3": 0 00:05:53.441 } 00:05:53.441 }, 00:05:53.441 { 00:05:53.441 "method": "nvmf_create_transport", 00:05:53.441 "params": { 00:05:53.441 "trtype": "TCP", 00:05:53.441 "max_queue_depth": 128, 00:05:53.441 "max_io_qpairs_per_ctrlr": 127, 00:05:53.441 "in_capsule_data_size": 4096, 00:05:53.441 "max_io_size": 131072, 00:05:53.441 "io_unit_size": 131072, 00:05:53.441 "max_aq_depth": 128, 00:05:53.441 "num_shared_buffers": 511, 00:05:53.441 "buf_cache_size": 4294967295, 00:05:53.441 "dif_insert_or_strip": false, 00:05:53.441 "zcopy": false, 00:05:53.441 "c2h_success": true, 00:05:53.441 "sock_priority": 0, 00:05:53.441 "abort_timeout_sec": 1, 00:05:53.441 "ack_timeout": 0, 00:05:53.441 "data_wr_pool_size": 0 00:05:53.441 } 00:05:53.441 } 00:05:53.441 ] 00:05:53.441 }, 00:05:53.441 { 00:05:53.441 "subsystem": "iscsi", 00:05:53.441 "config": [ 00:05:53.441 { 00:05:53.441 "method": "iscsi_set_options", 00:05:53.441 "params": { 00:05:53.441 "node_base": "iqn.2016-06.io.spdk", 00:05:53.441 "max_sessions": 128, 00:05:53.441 "max_connections_per_session": 2, 00:05:53.441 "max_queue_depth": 64, 00:05:53.441 "default_time2wait": 2, 00:05:53.441 "default_time2retain": 20, 00:05:53.441 "first_burst_length": 8192, 00:05:53.441 "immediate_data": true, 00:05:53.441 "allow_duplicated_isid": false, 00:05:53.441 
"error_recovery_level": 0, 00:05:53.441 "nop_timeout": 60, 00:05:53.441 "nop_in_interval": 30, 00:05:53.441 "disable_chap": false, 00:05:53.441 "require_chap": false, 00:05:53.441 "mutual_chap": false, 00:05:53.441 "chap_group": 0, 00:05:53.441 "max_large_datain_per_connection": 64, 00:05:53.441 "max_r2t_per_connection": 4, 00:05:53.441 "pdu_pool_size": 36864, 00:05:53.441 "immediate_data_pool_size": 16384, 00:05:53.441 "data_out_pool_size": 2048 00:05:53.441 } 00:05:53.441 } 00:05:53.441 ] 00:05:53.441 } 00:05:53.441 ] 00:05:53.441 } 00:05:53.441 11:17:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:53.441 11:17:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3332568 00:05:53.441 11:17:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3332568 ']' 00:05:53.441 11:17:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3332568 00:05:53.441 11:17:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:53.441 11:17:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.441 11:17:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3332568 00:05:53.441 11:17:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:53.441 11:17:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:53.441 11:17:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3332568' 00:05:53.441 killing process with pid 3332568 00:05:53.441 11:17:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3332568 00:05:53.441 11:17:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3332568 00:05:53.700 11:17:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:53.700 11:17:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3332807 00:05:53.700 11:17:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3332807 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3332807 ']' 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3332807 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3332807 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3332807' 00:05:58.980 killing process with pid 3332807 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3332807 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3332807 
00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:58.980 00:05:58.980 real 0m6.544s 00:05:58.980 user 0m6.444s 00:05:58.980 sys 0m0.506s 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:58.980 ************************************ 00:05:58.980 END TEST skip_rpc_with_json 00:05:58.980 ************************************ 00:05:58.980 11:17:27 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:58.980 11:17:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:58.980 11:17:27 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.980 11:17:27 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.980 11:17:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.980 ************************************ 00:05:58.980 START TEST skip_rpc_with_delay 00:05:58.980 ************************************ 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:58.980 [2024-07-15 11:17:27.661202] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:58.980 [2024-07-15 11:17:27.661286] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:58.980 00:05:58.980 real 0m0.086s 00:05:58.980 user 0m0.057s 00:05:58.980 sys 0m0.028s 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.980 11:17:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:58.980 ************************************ 00:05:58.980 END TEST skip_rpc_with_delay 00:05:58.980 ************************************ 00:05:59.240 11:17:27 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:59.240 11:17:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:59.240 11:17:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:59.240 11:17:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:59.240 11:17:27 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.240 11:17:27 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.240 11:17:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.240 ************************************ 00:05:59.240 START TEST exit_on_failed_rpc_init 00:05:59.240 ************************************ 00:05:59.240 11:17:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:59.240 11:17:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3334018 00:05:59.240 11:17:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3334018 00:05:59.240 11:17:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.240 11:17:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 3334018 ']' 00:05:59.240 11:17:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.240 11:17:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.240 11:17:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.240 11:17:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.240 11:17:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:59.240 [2024-07-15 11:17:27.817672] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:05:59.240 [2024-07-15 11:17:27.817734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3334018 ] 00:05:59.240 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.240 [2024-07-15 11:17:27.883249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.499 [2024-07-15 11:17:27.959546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.067 11:17:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.067 11:17:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:00.067 11:17:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.067 11:17:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:00.067 11:17:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:00.067 11:17:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:00.067 11:17:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.067 11:17:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.067 11:17:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.067 11:17:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.067 11:17:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.067 11:17:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.067 11:17:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.067 11:17:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:00.067 11:17:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:00.067 [2024-07-15 11:17:28.652453] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
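exit_on_failed_rpc_init starts a second target (core mask 0x2) while the first still owns the default RPC socket /var/tmp/spdk.sock, and the entries that follow show the expected "socket in use" failure and non-zero exit. A rough sketch of the scenario, with a plain sleep standing in for the test's waitforlisten helper:

  # Sketch: two targets contending for the same default RPC socket.
  SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$SPDK_BIN" -m 0x1 &          # first instance listens on /var/tmp/spdk.sock
  first=$!
  sleep 5                       # stand-in for waiting until the first instance is listening
  if "$SPDK_BIN" -m 0x2; then   # second instance must refuse to start
      echo "unexpected: second target bound an RPC socket already in use" >&2
      kill "$first"
      exit 1
  fi
  kill "$first"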
00:06:00.067 [2024-07-15 11:17:28.652506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3334199 ] 00:06:00.067 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.067 [2024-07-15 11:17:28.728096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.326 [2024-07-15 11:17:28.792020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.326 [2024-07-15 11:17:28.792083] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:00.326 [2024-07-15 11:17:28.792093] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:00.326 [2024-07-15 11:17:28.792099] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:00.326 11:17:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:00.326 11:17:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:00.326 11:17:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:00.326 11:17:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:00.326 11:17:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:00.326 11:17:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:00.326 11:17:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:00.326 11:17:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3334018 00:06:00.326 11:17:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 3334018 ']' 00:06:00.326 11:17:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 3334018 00:06:00.326 11:17:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:00.326 11:17:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:00.326 11:17:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3334018 00:06:00.326 11:17:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:00.326 11:17:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:00.326 11:17:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3334018' 00:06:00.326 killing process with pid 3334018 00:06:00.326 11:17:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 3334018 00:06:00.326 11:17:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 3334018 00:06:00.586 00:06:00.586 real 0m1.356s 00:06:00.586 user 0m1.588s 00:06:00.586 sys 0m0.378s 00:06:00.586 11:17:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.586 11:17:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:00.586 ************************************ 00:06:00.586 END TEST exit_on_failed_rpc_init 00:06:00.586 ************************************ 00:06:00.586 11:17:29 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:00.586 11:17:29 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:00.586 00:06:00.586 real 0m13.657s 00:06:00.586 user 0m13.325s 00:06:00.586 sys 0m1.403s 00:06:00.586 11:17:29 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.586 11:17:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.586 ************************************ 00:06:00.586 END TEST skip_rpc 00:06:00.586 ************************************ 00:06:00.586 11:17:29 -- common/autotest_common.sh@1142 -- # return 0 00:06:00.586 11:17:29 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:00.586 11:17:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.586 11:17:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.586 11:17:29 -- common/autotest_common.sh@10 -- # set +x 00:06:00.586 ************************************ 00:06:00.586 START TEST rpc_client 00:06:00.586 ************************************ 00:06:00.586 11:17:29 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:00.848 * Looking for test storage... 00:06:00.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:00.848 11:17:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:00.848 OK 00:06:00.848 11:17:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:00.848 00:06:00.848 real 0m0.127s 00:06:00.848 user 0m0.050s 00:06:00.848 sys 0m0.085s 00:06:00.848 11:17:29 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.848 11:17:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:00.848 ************************************ 00:06:00.848 END TEST rpc_client 00:06:00.848 ************************************ 00:06:00.848 11:17:29 -- common/autotest_common.sh@1142 -- # return 0 00:06:00.848 11:17:29 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:00.848 11:17:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.848 11:17:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.848 11:17:29 -- common/autotest_common.sh@10 -- # set +x 00:06:00.848 ************************************ 00:06:00.848 START TEST json_config 00:06:00.848 ************************************ 00:06:00.848 11:17:29 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:00.848 11:17:29 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:00.848 11:17:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:00.848 11:17:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:00.848 11:17:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:00.848 11:17:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:00.848 11:17:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:00.848 11:17:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:00.848 11:17:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:00.848 11:17:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:00.848 
11:17:29 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:00.848 11:17:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:00.848 11:17:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:00.848 11:17:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:00.848 11:17:29 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:00.848 11:17:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:00.848 11:17:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:00.848 11:17:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:00.848 11:17:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:00.848 11:17:29 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:00.848 11:17:29 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:00.848 11:17:29 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:00.848 11:17:29 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:00.848 11:17:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.848 11:17:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.848 11:17:29 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.848 11:17:29 json_config -- paths/export.sh@5 -- # export PATH 00:06:00.848 11:17:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.848 11:17:29 json_config -- nvmf/common.sh@47 -- # : 0 00:06:00.848 11:17:29 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:00.848 11:17:29 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:00.848 11:17:29 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:00.848 11:17:29 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:00.848 11:17:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:00.848 11:17:29 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:00.848 11:17:29 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:00.848 11:17:29 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:00.848 11:17:29 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:00.848 11:17:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:00.848 11:17:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:00.848 11:17:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:00.848 11:17:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:00.848 11:17:29 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:00.848 11:17:29 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:00.848 11:17:29 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:00.848 11:17:29 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:00.848 11:17:29 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:00.848 11:17:29 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:00.848 11:17:29 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:00.848 11:17:29 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:00.848 11:17:29 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:00.848 11:17:29 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:00.848 11:17:29 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:00.848 INFO: JSON configuration test init 00:06:00.848 11:17:29 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:00.848 11:17:29 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:00.848 11:17:29 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:00.848 11:17:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.848 11:17:29 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:00.848 11:17:29 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:00.848 11:17:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.110 11:17:29 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:01.110 11:17:29 json_config -- json_config/common.sh@9 -- # local app=target 00:06:01.110 11:17:29 json_config -- json_config/common.sh@10 -- # shift 00:06:01.110 11:17:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:01.110 11:17:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:01.110 11:17:29 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:01.110 11:17:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:01.110 11:17:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:01.110 11:17:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3334514 00:06:01.110 11:17:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:01.110 Waiting for target to run... 00:06:01.110 11:17:29 json_config -- json_config/common.sh@25 -- # waitforlisten 3334514 /var/tmp/spdk_tgt.sock 00:06:01.110 11:17:29 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:01.110 11:17:29 json_config -- common/autotest_common.sh@829 -- # '[' -z 3334514 ']' 00:06:01.110 11:17:29 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:01.110 11:17:29 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.110 11:17:29 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:01.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:01.110 11:17:29 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.110 11:17:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.110 [2024-07-15 11:17:29.613718] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:06:01.110 [2024-07-15 11:17:29.613782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3334514 ] 00:06:01.110 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.372 [2024-07-15 11:17:29.923303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.372 [2024-07-15 11:17:29.980292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.942 11:17:30 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.942 11:17:30 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:01.942 11:17:30 json_config -- json_config/common.sh@26 -- # echo '' 00:06:01.942 00:06:01.942 11:17:30 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:06:01.942 11:17:30 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:01.942 11:17:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:01.942 11:17:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.942 11:17:30 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:01.942 11:17:30 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:01.942 11:17:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:01.942 11:17:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.942 11:17:30 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:01.942 11:17:30 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:01.942 11:17:30 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:02.511 11:17:30 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:02.511 11:17:30 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:02.511 11:17:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:02.511 11:17:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.511 11:17:30 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:02.511 11:17:30 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:02.511 11:17:30 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:02.511 11:17:30 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:02.511 11:17:30 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:02.511 11:17:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:02.511 11:17:31 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:02.511 11:17:31 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:02.511 11:17:31 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:02.511 11:17:31 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:02.511 11:17:31 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:02.511 11:17:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.511 11:17:31 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:02.511 11:17:31 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:02.511 11:17:31 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:02.511 11:17:31 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:02.511 11:17:31 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:02.511 11:17:31 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:02.511 11:17:31 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:02.511 11:17:31 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:02.511 11:17:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.511 11:17:31 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:02.511 11:17:31 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:02.511 11:17:31 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:02.511 11:17:31 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:02.512 11:17:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:02.772 MallocForNvmf0 00:06:02.772 11:17:31 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:02.772 11:17:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:03.033 MallocForNvmf1 00:06:03.033 11:17:31 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:03.033 11:17:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:03.033 [2024-07-15 11:17:31.630901] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:03.033 11:17:31 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:03.033 11:17:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:03.294 11:17:31 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:03.294 11:17:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:03.294 11:17:31 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:03.294 11:17:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:03.554 11:17:32 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:03.554 11:17:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:03.814 [2024-07-15 11:17:32.264910] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:03.814 11:17:32 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:03.814 11:17:32 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:03.814 11:17:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.814 11:17:32 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:03.814 11:17:32 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:03.814 11:17:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.814 11:17:32 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:03.814 11:17:32 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:03.814 11:17:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:03.814 MallocBdevForConfigChangeCheck 00:06:04.075 11:17:32 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:04.075 11:17:32 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:04.075 11:17:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.075 11:17:32 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:04.075 11:17:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:04.335 11:17:32 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:04.335 INFO: shutting down applications... 00:06:04.335 11:17:32 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:04.335 11:17:32 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:04.335 11:17:32 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:04.335 11:17:32 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:04.595 Calling clear_iscsi_subsystem 00:06:04.595 Calling clear_nvmf_subsystem 00:06:04.595 Calling clear_nbd_subsystem 00:06:04.595 Calling clear_ublk_subsystem 00:06:04.595 Calling clear_vhost_blk_subsystem 00:06:04.595 Calling clear_vhost_scsi_subsystem 00:06:04.595 Calling clear_bdev_subsystem 00:06:04.595 11:17:33 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:04.595 11:17:33 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:04.595 11:17:33 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:04.595 11:17:33 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:04.595 11:17:33 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:04.595 11:17:33 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:05.165 11:17:33 json_config -- json_config/json_config.sh@345 -- # break 00:06:05.165 11:17:33 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:05.165 11:17:33 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:05.165 11:17:33 json_config -- json_config/common.sh@31 -- # local app=target 00:06:05.165 11:17:33 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:05.165 11:17:33 json_config -- json_config/common.sh@35 -- # [[ -n 3334514 ]] 00:06:05.165 11:17:33 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3334514 00:06:05.165 11:17:33 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:05.165 11:17:33 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:05.165 11:17:33 json_config -- json_config/common.sh@41 -- # kill -0 3334514 00:06:05.165 11:17:33 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:05.466 11:17:34 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:05.466 11:17:34 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:05.466 11:17:34 json_config -- json_config/common.sh@41 -- # kill -0 3334514 00:06:05.466 11:17:34 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:05.466 11:17:34 json_config -- json_config/common.sh@43 -- # break 00:06:05.466 11:17:34 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:05.466 11:17:34 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:06:05.466 SPDK target shutdown done 00:06:05.466 11:17:34 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:05.466 INFO: relaunching applications... 00:06:05.466 11:17:34 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:05.466 11:17:34 json_config -- json_config/common.sh@9 -- # local app=target 00:06:05.466 11:17:34 json_config -- json_config/common.sh@10 -- # shift 00:06:05.466 11:17:34 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:05.466 11:17:34 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:05.466 11:17:34 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:05.466 11:17:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:05.466 11:17:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:05.466 11:17:34 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3335455 00:06:05.466 11:17:34 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:05.466 Waiting for target to run... 00:06:05.466 11:17:34 json_config -- json_config/common.sh@25 -- # waitforlisten 3335455 /var/tmp/spdk_tgt.sock 00:06:05.466 11:17:34 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:05.466 11:17:34 json_config -- common/autotest_common.sh@829 -- # '[' -z 3335455 ']' 00:06:05.466 11:17:34 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:05.466 11:17:34 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.466 11:17:34 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:05.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:05.466 11:17:34 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.466 11:17:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.726 [2024-07-15 11:17:34.205974] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
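Before the relaunch above, the target was populated over /var/tmp/spdk_tgt.sock with the rpc.py calls traced earlier in this run; collected here for reference, with arguments copied from the trace (the rpc wrapper is a local shorthand for the test's tgt_rpc helper):

  # The RPC calls that built the configuration later saved to spdk_tgt_config.json.
  rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
  rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  rpc nvmf_create_transport -t tcp -u 8192 -c 0
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420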
00:06:05.726 [2024-07-15 11:17:34.206037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3335455 ] 00:06:05.726 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.986 [2024-07-15 11:17:34.549252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.986 [2024-07-15 11:17:34.604682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.557 [2024-07-15 11:17:35.097570] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:06.557 [2024-07-15 11:17:35.129923] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:06.557 11:17:35 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.557 11:17:35 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:06.557 11:17:35 json_config -- json_config/common.sh@26 -- # echo '' 00:06:06.557 00:06:06.557 11:17:35 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:06.557 11:17:35 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:06.557 INFO: Checking if target configuration is the same... 00:06:06.557 11:17:35 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:06.557 11:17:35 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:06.557 11:17:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:06.557 + '[' 2 -ne 2 ']' 00:06:06.557 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:06.557 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:06.557 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:06.557 +++ basename /dev/fd/62 00:06:06.557 ++ mktemp /tmp/62.XXX 00:06:06.557 + tmp_file_1=/tmp/62.w1g 00:06:06.557 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:06.557 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:06.557 + tmp_file_2=/tmp/spdk_tgt_config.json.3DB 00:06:06.557 + ret=0 00:06:06.557 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:06.817 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:06.817 + diff -u /tmp/62.w1g /tmp/spdk_tgt_config.json.3DB 00:06:06.817 + echo 'INFO: JSON config files are the same' 00:06:06.817 INFO: JSON config files are the same 00:06:06.817 + rm /tmp/62.w1g /tmp/spdk_tgt_config.json.3DB 00:06:06.817 + exit 0 00:06:07.076 11:17:35 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:07.076 11:17:35 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:07.076 INFO: changing configuration and checking if this can be detected... 
00:06:07.076 11:17:35 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:07.077 11:17:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:07.077 11:17:35 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:07.077 11:17:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:07.077 11:17:35 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:07.077 + '[' 2 -ne 2 ']' 00:06:07.077 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:07.077 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:07.077 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:07.077 +++ basename /dev/fd/62 00:06:07.077 ++ mktemp /tmp/62.XXX 00:06:07.077 + tmp_file_1=/tmp/62.UKz 00:06:07.077 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:07.077 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:07.077 + tmp_file_2=/tmp/spdk_tgt_config.json.gZe 00:06:07.077 + ret=0 00:06:07.077 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:07.336 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:07.336 + diff -u /tmp/62.UKz /tmp/spdk_tgt_config.json.gZe 00:06:07.337 + ret=1 00:06:07.337 + echo '=== Start of file: /tmp/62.UKz ===' 00:06:07.337 + cat /tmp/62.UKz 00:06:07.337 + echo '=== End of file: /tmp/62.UKz ===' 00:06:07.337 + echo '' 00:06:07.337 + echo '=== Start of file: /tmp/spdk_tgt_config.json.gZe ===' 00:06:07.337 + cat /tmp/spdk_tgt_config.json.gZe 00:06:07.337 + echo '=== End of file: /tmp/spdk_tgt_config.json.gZe ===' 00:06:07.337 + echo '' 00:06:07.337 + rm /tmp/62.UKz /tmp/spdk_tgt_config.json.gZe 00:06:07.597 + exit 1 00:06:07.597 11:17:36 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:07.597 INFO: configuration change detected. 
00:06:07.597 11:17:36 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:07.597 11:17:36 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:07.597 11:17:36 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:07.597 11:17:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.597 11:17:36 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:07.597 11:17:36 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:07.597 11:17:36 json_config -- json_config/json_config.sh@317 -- # [[ -n 3335455 ]] 00:06:07.597 11:17:36 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:07.597 11:17:36 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:07.597 11:17:36 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:07.597 11:17:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.597 11:17:36 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:07.597 11:17:36 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:07.597 11:17:36 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:07.597 11:17:36 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:07.597 11:17:36 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:07.597 11:17:36 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:07.597 11:17:36 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:07.597 11:17:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.597 11:17:36 json_config -- json_config/json_config.sh@323 -- # killprocess 3335455 00:06:07.597 11:17:36 json_config -- common/autotest_common.sh@948 -- # '[' -z 3335455 ']' 00:06:07.597 11:17:36 json_config -- common/autotest_common.sh@952 -- # kill -0 3335455 00:06:07.597 11:17:36 json_config -- common/autotest_common.sh@953 -- # uname 00:06:07.597 11:17:36 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:07.597 11:17:36 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3335455 00:06:07.597 11:17:36 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:07.597 11:17:36 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:07.597 11:17:36 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3335455' 00:06:07.597 killing process with pid 3335455 00:06:07.597 11:17:36 json_config -- common/autotest_common.sh@967 -- # kill 3335455 00:06:07.597 11:17:36 json_config -- common/autotest_common.sh@972 -- # wait 3335455 00:06:07.858 11:17:36 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:07.858 11:17:36 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:07.858 11:17:36 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:07.858 11:17:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.858 11:17:36 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:07.858 11:17:36 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:07.858 INFO: Success 00:06:07.858 00:06:07.858 real 0m7.047s 
00:06:07.858 user 0m8.474s 00:06:07.858 sys 0m1.793s 00:06:07.858 11:17:36 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.858 11:17:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.858 ************************************ 00:06:07.858 END TEST json_config 00:06:07.858 ************************************ 00:06:07.858 11:17:36 -- common/autotest_common.sh@1142 -- # return 0 00:06:07.858 11:17:36 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:07.858 11:17:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.858 11:17:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.858 11:17:36 -- common/autotest_common.sh@10 -- # set +x 00:06:07.858 ************************************ 00:06:07.858 START TEST json_config_extra_key 00:06:07.858 ************************************ 00:06:07.858 11:17:36 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:08.120 11:17:36 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:08.120 11:17:36 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:08.120 11:17:36 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:08.120 11:17:36 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:08.120 11:17:36 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:08.120 11:17:36 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:08.120 11:17:36 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:08.120 11:17:36 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:08.120 11:17:36 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:08.120 11:17:36 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:08.120 11:17:36 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:08.120 11:17:36 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:08.120 11:17:36 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:08.120 11:17:36 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:08.120 11:17:36 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:08.120 11:17:36 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:08.120 11:17:36 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:08.120 11:17:36 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:08.120 11:17:36 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:08.120 11:17:36 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.120 11:17:36 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.120 11:17:36 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.120 11:17:36 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.120 11:17:36 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.120 11:17:36 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.120 11:17:36 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:08.121 11:17:36 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.121 11:17:36 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:08.121 11:17:36 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:08.121 11:17:36 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:08.121 11:17:36 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:08.121 11:17:36 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:08.121 11:17:36 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:08.121 11:17:36 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:08.121 11:17:36 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:08.121 11:17:36 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:08.121 11:17:36 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:08.121 11:17:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:08.121 11:17:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:08.121 11:17:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:08.121 11:17:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:08.121 11:17:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:08.121 11:17:36 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:08.121 11:17:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:08.121 11:17:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:08.121 11:17:36 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:08.121 11:17:36 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:08.121 INFO: launching applications... 00:06:08.121 11:17:36 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:08.121 11:17:36 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:08.121 11:17:36 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:08.121 11:17:36 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:08.121 11:17:36 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:08.121 11:17:36 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:08.121 11:17:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.121 11:17:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.121 11:17:36 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3336227 00:06:08.121 11:17:36 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:08.121 Waiting for target to run... 00:06:08.121 11:17:36 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3336227 /var/tmp/spdk_tgt.sock 00:06:08.121 11:17:36 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 3336227 ']' 00:06:08.121 11:17:36 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:08.121 11:17:36 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:08.121 11:17:36 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.121 11:17:36 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:08.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:08.121 11:17:36 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.121 11:17:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:08.121 [2024-07-15 11:17:36.725832] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
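Once the extra_key target launched above is up, the test immediately shuts it down again; the shutdown helper traced below sends SIGINT and polls with kill -0 until the process disappears, giving up after 30 half-second attempts. A sketch of that loop (the pid value is taken from the trace and is only illustrative):

  # Sketch of the graceful-shutdown helper used throughout these tests.
  app_pid=3336227
  kill -SIGINT "$app_pid"
  for i in $(seq 1 30); do
      if ! kill -0 "$app_pid" 2>/dev/null; then
          break                    # process is gone, shutdown finished
      fi
      sleep 0.5
  done
  echo 'SPDK target shutdown done'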
00:06:08.121 [2024-07-15 11:17:36.725888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3336227 ] 00:06:08.121 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.382 [2024-07-15 11:17:37.016658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.382 [2024-07-15 11:17:37.068294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.954 11:17:37 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.954 11:17:37 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:08.954 11:17:37 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:08.954 00:06:08.954 11:17:37 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:08.954 INFO: shutting down applications... 00:06:08.954 11:17:37 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:08.954 11:17:37 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:08.954 11:17:37 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:08.954 11:17:37 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3336227 ]] 00:06:08.954 11:17:37 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3336227 00:06:08.954 11:17:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:08.954 11:17:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.954 11:17:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3336227 00:06:08.954 11:17:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:09.527 11:17:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:09.527 11:17:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:09.527 11:17:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3336227 00:06:09.527 11:17:37 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:09.527 11:17:37 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:09.527 11:17:37 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:09.527 11:17:37 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:09.527 SPDK target shutdown done 00:06:09.527 11:17:37 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:09.527 Success 00:06:09.527 00:06:09.527 real 0m1.438s 00:06:09.527 user 0m1.079s 00:06:09.527 sys 0m0.391s 00:06:09.527 11:17:37 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.527 11:17:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:09.527 ************************************ 00:06:09.527 END TEST json_config_extra_key 00:06:09.527 ************************************ 00:06:09.527 11:17:38 -- common/autotest_common.sh@1142 -- # return 0 00:06:09.527 11:17:38 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:09.527 11:17:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.527 11:17:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.527 11:17:38 -- 
common/autotest_common.sh@10 -- # set +x 00:06:09.527 ************************************ 00:06:09.527 START TEST alias_rpc 00:06:09.527 ************************************ 00:06:09.527 11:17:38 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:09.527 * Looking for test storage... 00:06:09.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:09.527 11:17:38 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:09.527 11:17:38 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3336606 00:06:09.527 11:17:38 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3336606 00:06:09.527 11:17:38 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:09.527 11:17:38 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 3336606 ']' 00:06:09.527 11:17:38 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.527 11:17:38 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.527 11:17:38 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.527 11:17:38 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.527 11:17:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.527 [2024-07-15 11:17:38.224592] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:06:09.527 [2024-07-15 11:17:38.224650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3336606 ] 00:06:09.788 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.788 [2024-07-15 11:17:38.284755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.788 [2024-07-15 11:17:38.351977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.360 11:17:38 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.360 11:17:38 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:10.360 11:17:38 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:10.620 11:17:39 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3336606 00:06:10.620 11:17:39 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 3336606 ']' 00:06:10.620 11:17:39 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 3336606 00:06:10.620 11:17:39 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:10.620 11:17:39 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.620 11:17:39 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3336606 00:06:10.620 11:17:39 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:10.620 11:17:39 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:10.620 11:17:39 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3336606' 00:06:10.620 killing process with pid 3336606 00:06:10.620 11:17:39 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 3336606 00:06:10.620 11:17:39 alias_rpc -- common/autotest_common.sh@972 -- # wait 3336606 00:06:10.880 00:06:10.880 real 0m1.347s 00:06:10.880 user 0m1.472s 00:06:10.880 sys 0m0.353s 00:06:10.880 11:17:39 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.880 11:17:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.880 ************************************ 00:06:10.880 END TEST alias_rpc 00:06:10.880 ************************************ 00:06:10.880 11:17:39 -- common/autotest_common.sh@1142 -- # return 0 00:06:10.880 11:17:39 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:10.880 11:17:39 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:10.880 11:17:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.880 11:17:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.880 11:17:39 -- common/autotest_common.sh@10 -- # set +x 00:06:10.880 ************************************ 00:06:10.880 START TEST spdkcli_tcp 00:06:10.880 ************************************ 00:06:10.880 11:17:39 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:11.140 * Looking for test storage... 00:06:11.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:11.140 11:17:39 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:11.140 11:17:39 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:11.140 11:17:39 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:11.140 11:17:39 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:11.140 11:17:39 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:11.140 11:17:39 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:11.140 11:17:39 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:11.140 11:17:39 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:11.140 11:17:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:11.140 11:17:39 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:11.140 11:17:39 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3336882 00:06:11.140 11:17:39 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3336882 00:06:11.140 11:17:39 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 3336882 ']' 00:06:11.140 11:17:39 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.141 11:17:39 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.141 11:17:39 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.141 11:17:39 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.141 11:17:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:11.141 [2024-07-15 11:17:39.638601] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:06:11.141 [2024-07-15 11:17:39.638658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3336882 ] 00:06:11.141 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.141 [2024-07-15 11:17:39.697486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.141 [2024-07-15 11:17:39.764157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.141 [2024-07-15 11:17:39.764160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.712 11:17:40 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.712 11:17:40 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:11.712 11:17:40 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3337010 00:06:11.712 11:17:40 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:11.712 11:17:40 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:11.973 [ 00:06:11.973 "bdev_malloc_delete", 00:06:11.973 "bdev_malloc_create", 00:06:11.973 "bdev_null_resize", 00:06:11.973 "bdev_null_delete", 00:06:11.973 "bdev_null_create", 00:06:11.973 "bdev_nvme_cuse_unregister", 00:06:11.973 "bdev_nvme_cuse_register", 00:06:11.973 "bdev_opal_new_user", 00:06:11.973 "bdev_opal_set_lock_state", 00:06:11.973 "bdev_opal_delete", 00:06:11.973 "bdev_opal_get_info", 00:06:11.973 "bdev_opal_create", 00:06:11.973 "bdev_nvme_opal_revert", 00:06:11.973 "bdev_nvme_opal_init", 00:06:11.973 "bdev_nvme_send_cmd", 00:06:11.973 "bdev_nvme_get_path_iostat", 00:06:11.973 "bdev_nvme_get_mdns_discovery_info", 00:06:11.973 "bdev_nvme_stop_mdns_discovery", 00:06:11.973 "bdev_nvme_start_mdns_discovery", 00:06:11.973 "bdev_nvme_set_multipath_policy", 00:06:11.973 "bdev_nvme_set_preferred_path", 00:06:11.973 "bdev_nvme_get_io_paths", 00:06:11.973 "bdev_nvme_remove_error_injection", 00:06:11.973 "bdev_nvme_add_error_injection", 00:06:11.973 "bdev_nvme_get_discovery_info", 00:06:11.973 "bdev_nvme_stop_discovery", 00:06:11.973 "bdev_nvme_start_discovery", 00:06:11.973 "bdev_nvme_get_controller_health_info", 00:06:11.973 "bdev_nvme_disable_controller", 00:06:11.973 "bdev_nvme_enable_controller", 00:06:11.973 "bdev_nvme_reset_controller", 00:06:11.973 "bdev_nvme_get_transport_statistics", 00:06:11.973 "bdev_nvme_apply_firmware", 00:06:11.973 "bdev_nvme_detach_controller", 00:06:11.973 "bdev_nvme_get_controllers", 00:06:11.973 "bdev_nvme_attach_controller", 00:06:11.973 "bdev_nvme_set_hotplug", 00:06:11.973 "bdev_nvme_set_options", 00:06:11.973 "bdev_passthru_delete", 00:06:11.973 "bdev_passthru_create", 00:06:11.973 "bdev_lvol_set_parent_bdev", 00:06:11.973 "bdev_lvol_set_parent", 00:06:11.973 "bdev_lvol_check_shallow_copy", 00:06:11.973 "bdev_lvol_start_shallow_copy", 00:06:11.973 "bdev_lvol_grow_lvstore", 00:06:11.973 "bdev_lvol_get_lvols", 00:06:11.973 "bdev_lvol_get_lvstores", 00:06:11.973 "bdev_lvol_delete", 00:06:11.973 "bdev_lvol_set_read_only", 00:06:11.973 "bdev_lvol_resize", 00:06:11.973 "bdev_lvol_decouple_parent", 00:06:11.973 "bdev_lvol_inflate", 00:06:11.973 "bdev_lvol_rename", 00:06:11.973 "bdev_lvol_clone_bdev", 00:06:11.973 "bdev_lvol_clone", 00:06:11.973 "bdev_lvol_snapshot", 00:06:11.973 "bdev_lvol_create", 00:06:11.973 "bdev_lvol_delete_lvstore", 00:06:11.973 
"bdev_lvol_rename_lvstore", 00:06:11.973 "bdev_lvol_create_lvstore", 00:06:11.973 "bdev_raid_set_options", 00:06:11.973 "bdev_raid_remove_base_bdev", 00:06:11.973 "bdev_raid_add_base_bdev", 00:06:11.973 "bdev_raid_delete", 00:06:11.973 "bdev_raid_create", 00:06:11.973 "bdev_raid_get_bdevs", 00:06:11.973 "bdev_error_inject_error", 00:06:11.973 "bdev_error_delete", 00:06:11.973 "bdev_error_create", 00:06:11.973 "bdev_split_delete", 00:06:11.973 "bdev_split_create", 00:06:11.973 "bdev_delay_delete", 00:06:11.973 "bdev_delay_create", 00:06:11.973 "bdev_delay_update_latency", 00:06:11.973 "bdev_zone_block_delete", 00:06:11.973 "bdev_zone_block_create", 00:06:11.973 "blobfs_create", 00:06:11.973 "blobfs_detect", 00:06:11.973 "blobfs_set_cache_size", 00:06:11.973 "bdev_aio_delete", 00:06:11.973 "bdev_aio_rescan", 00:06:11.973 "bdev_aio_create", 00:06:11.973 "bdev_ftl_set_property", 00:06:11.973 "bdev_ftl_get_properties", 00:06:11.973 "bdev_ftl_get_stats", 00:06:11.973 "bdev_ftl_unmap", 00:06:11.973 "bdev_ftl_unload", 00:06:11.973 "bdev_ftl_delete", 00:06:11.973 "bdev_ftl_load", 00:06:11.973 "bdev_ftl_create", 00:06:11.973 "bdev_virtio_attach_controller", 00:06:11.973 "bdev_virtio_scsi_get_devices", 00:06:11.973 "bdev_virtio_detach_controller", 00:06:11.973 "bdev_virtio_blk_set_hotplug", 00:06:11.973 "bdev_iscsi_delete", 00:06:11.973 "bdev_iscsi_create", 00:06:11.973 "bdev_iscsi_set_options", 00:06:11.973 "accel_error_inject_error", 00:06:11.973 "ioat_scan_accel_module", 00:06:11.973 "dsa_scan_accel_module", 00:06:11.973 "iaa_scan_accel_module", 00:06:11.974 "vfu_virtio_create_scsi_endpoint", 00:06:11.974 "vfu_virtio_scsi_remove_target", 00:06:11.974 "vfu_virtio_scsi_add_target", 00:06:11.974 "vfu_virtio_create_blk_endpoint", 00:06:11.974 "vfu_virtio_delete_endpoint", 00:06:11.974 "keyring_file_remove_key", 00:06:11.974 "keyring_file_add_key", 00:06:11.974 "keyring_linux_set_options", 00:06:11.974 "iscsi_get_histogram", 00:06:11.974 "iscsi_enable_histogram", 00:06:11.974 "iscsi_set_options", 00:06:11.974 "iscsi_get_auth_groups", 00:06:11.974 "iscsi_auth_group_remove_secret", 00:06:11.974 "iscsi_auth_group_add_secret", 00:06:11.974 "iscsi_delete_auth_group", 00:06:11.974 "iscsi_create_auth_group", 00:06:11.974 "iscsi_set_discovery_auth", 00:06:11.974 "iscsi_get_options", 00:06:11.974 "iscsi_target_node_request_logout", 00:06:11.974 "iscsi_target_node_set_redirect", 00:06:11.974 "iscsi_target_node_set_auth", 00:06:11.974 "iscsi_target_node_add_lun", 00:06:11.974 "iscsi_get_stats", 00:06:11.974 "iscsi_get_connections", 00:06:11.974 "iscsi_portal_group_set_auth", 00:06:11.974 "iscsi_start_portal_group", 00:06:11.974 "iscsi_delete_portal_group", 00:06:11.974 "iscsi_create_portal_group", 00:06:11.974 "iscsi_get_portal_groups", 00:06:11.974 "iscsi_delete_target_node", 00:06:11.974 "iscsi_target_node_remove_pg_ig_maps", 00:06:11.974 "iscsi_target_node_add_pg_ig_maps", 00:06:11.974 "iscsi_create_target_node", 00:06:11.974 "iscsi_get_target_nodes", 00:06:11.974 "iscsi_delete_initiator_group", 00:06:11.974 "iscsi_initiator_group_remove_initiators", 00:06:11.974 "iscsi_initiator_group_add_initiators", 00:06:11.974 "iscsi_create_initiator_group", 00:06:11.974 "iscsi_get_initiator_groups", 00:06:11.974 "nvmf_set_crdt", 00:06:11.974 "nvmf_set_config", 00:06:11.974 "nvmf_set_max_subsystems", 00:06:11.974 "nvmf_stop_mdns_prr", 00:06:11.974 "nvmf_publish_mdns_prr", 00:06:11.974 "nvmf_subsystem_get_listeners", 00:06:11.974 "nvmf_subsystem_get_qpairs", 00:06:11.974 "nvmf_subsystem_get_controllers", 00:06:11.974 
"nvmf_get_stats", 00:06:11.974 "nvmf_get_transports", 00:06:11.974 "nvmf_create_transport", 00:06:11.974 "nvmf_get_targets", 00:06:11.974 "nvmf_delete_target", 00:06:11.974 "nvmf_create_target", 00:06:11.974 "nvmf_subsystem_allow_any_host", 00:06:11.974 "nvmf_subsystem_remove_host", 00:06:11.974 "nvmf_subsystem_add_host", 00:06:11.974 "nvmf_ns_remove_host", 00:06:11.974 "nvmf_ns_add_host", 00:06:11.974 "nvmf_subsystem_remove_ns", 00:06:11.974 "nvmf_subsystem_add_ns", 00:06:11.974 "nvmf_subsystem_listener_set_ana_state", 00:06:11.974 "nvmf_discovery_get_referrals", 00:06:11.974 "nvmf_discovery_remove_referral", 00:06:11.974 "nvmf_discovery_add_referral", 00:06:11.974 "nvmf_subsystem_remove_listener", 00:06:11.974 "nvmf_subsystem_add_listener", 00:06:11.974 "nvmf_delete_subsystem", 00:06:11.974 "nvmf_create_subsystem", 00:06:11.974 "nvmf_get_subsystems", 00:06:11.974 "env_dpdk_get_mem_stats", 00:06:11.974 "nbd_get_disks", 00:06:11.974 "nbd_stop_disk", 00:06:11.974 "nbd_start_disk", 00:06:11.974 "ublk_recover_disk", 00:06:11.974 "ublk_get_disks", 00:06:11.974 "ublk_stop_disk", 00:06:11.974 "ublk_start_disk", 00:06:11.974 "ublk_destroy_target", 00:06:11.974 "ublk_create_target", 00:06:11.974 "virtio_blk_create_transport", 00:06:11.974 "virtio_blk_get_transports", 00:06:11.974 "vhost_controller_set_coalescing", 00:06:11.974 "vhost_get_controllers", 00:06:11.974 "vhost_delete_controller", 00:06:11.974 "vhost_create_blk_controller", 00:06:11.974 "vhost_scsi_controller_remove_target", 00:06:11.974 "vhost_scsi_controller_add_target", 00:06:11.974 "vhost_start_scsi_controller", 00:06:11.974 "vhost_create_scsi_controller", 00:06:11.974 "thread_set_cpumask", 00:06:11.974 "framework_get_governor", 00:06:11.974 "framework_get_scheduler", 00:06:11.974 "framework_set_scheduler", 00:06:11.974 "framework_get_reactors", 00:06:11.974 "thread_get_io_channels", 00:06:11.974 "thread_get_pollers", 00:06:11.974 "thread_get_stats", 00:06:11.974 "framework_monitor_context_switch", 00:06:11.974 "spdk_kill_instance", 00:06:11.974 "log_enable_timestamps", 00:06:11.974 "log_get_flags", 00:06:11.974 "log_clear_flag", 00:06:11.974 "log_set_flag", 00:06:11.974 "log_get_level", 00:06:11.974 "log_set_level", 00:06:11.974 "log_get_print_level", 00:06:11.974 "log_set_print_level", 00:06:11.974 "framework_enable_cpumask_locks", 00:06:11.974 "framework_disable_cpumask_locks", 00:06:11.974 "framework_wait_init", 00:06:11.974 "framework_start_init", 00:06:11.974 "scsi_get_devices", 00:06:11.974 "bdev_get_histogram", 00:06:11.974 "bdev_enable_histogram", 00:06:11.974 "bdev_set_qos_limit", 00:06:11.974 "bdev_set_qd_sampling_period", 00:06:11.974 "bdev_get_bdevs", 00:06:11.974 "bdev_reset_iostat", 00:06:11.974 "bdev_get_iostat", 00:06:11.974 "bdev_examine", 00:06:11.974 "bdev_wait_for_examine", 00:06:11.974 "bdev_set_options", 00:06:11.974 "notify_get_notifications", 00:06:11.974 "notify_get_types", 00:06:11.974 "accel_get_stats", 00:06:11.974 "accel_set_options", 00:06:11.974 "accel_set_driver", 00:06:11.974 "accel_crypto_key_destroy", 00:06:11.974 "accel_crypto_keys_get", 00:06:11.974 "accel_crypto_key_create", 00:06:11.974 "accel_assign_opc", 00:06:11.974 "accel_get_module_info", 00:06:11.974 "accel_get_opc_assignments", 00:06:11.974 "vmd_rescan", 00:06:11.974 "vmd_remove_device", 00:06:11.974 "vmd_enable", 00:06:11.974 "sock_get_default_impl", 00:06:11.974 "sock_set_default_impl", 00:06:11.974 "sock_impl_set_options", 00:06:11.974 "sock_impl_get_options", 00:06:11.974 "iobuf_get_stats", 00:06:11.974 "iobuf_set_options", 
00:06:11.974 "keyring_get_keys", 00:06:11.974 "framework_get_pci_devices", 00:06:11.974 "framework_get_config", 00:06:11.974 "framework_get_subsystems", 00:06:11.974 "vfu_tgt_set_base_path", 00:06:11.974 "trace_get_info", 00:06:11.974 "trace_get_tpoint_group_mask", 00:06:11.974 "trace_disable_tpoint_group", 00:06:11.974 "trace_enable_tpoint_group", 00:06:11.974 "trace_clear_tpoint_mask", 00:06:11.974 "trace_set_tpoint_mask", 00:06:11.974 "spdk_get_version", 00:06:11.974 "rpc_get_methods" 00:06:11.974 ] 00:06:11.974 11:17:40 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:11.974 11:17:40 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:11.974 11:17:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:11.974 11:17:40 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:11.974 11:17:40 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3336882 00:06:11.974 11:17:40 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 3336882 ']' 00:06:11.974 11:17:40 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 3336882 00:06:11.974 11:17:40 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:11.974 11:17:40 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:11.974 11:17:40 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3336882 00:06:11.974 11:17:40 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:11.974 11:17:40 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:11.974 11:17:40 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3336882' 00:06:11.974 killing process with pid 3336882 00:06:11.974 11:17:40 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 3336882 00:06:11.974 11:17:40 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 3336882 00:06:12.235 00:06:12.235 real 0m1.391s 00:06:12.235 user 0m2.608s 00:06:12.235 sys 0m0.390s 00:06:12.235 11:17:40 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.235 11:17:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:12.235 ************************************ 00:06:12.235 END TEST spdkcli_tcp 00:06:12.235 ************************************ 00:06:12.235 11:17:40 -- common/autotest_common.sh@1142 -- # return 0 00:06:12.235 11:17:40 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:12.235 11:17:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.235 11:17:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.235 11:17:40 -- common/autotest_common.sh@10 -- # set +x 00:06:12.497 ************************************ 00:06:12.497 START TEST dpdk_mem_utility 00:06:12.497 ************************************ 00:06:12.497 11:17:40 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:12.497 * Looking for test storage... 
00:06:12.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:12.497 11:17:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:12.497 11:17:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3337172 00:06:12.497 11:17:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3337172 00:06:12.497 11:17:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:12.497 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 3337172 ']' 00:06:12.497 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.497 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.497 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.497 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.497 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:12.497 [2024-07-15 11:17:41.118227] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:06:12.497 [2024-07-15 11:17:41.118291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3337172 ] 00:06:12.497 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.497 [2024-07-15 11:17:41.183865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.758 [2024-07-15 11:17:41.258032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.330 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.330 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:13.330 11:17:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:13.330 11:17:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:13.330 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.330 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:13.330 { 00:06:13.330 "filename": "/tmp/spdk_mem_dump.txt" 00:06:13.330 } 00:06:13.330 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.330 11:17:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:13.330 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:13.330 1 heaps totaling size 814.000000 MiB 00:06:13.330 size: 814.000000 MiB heap id: 0 00:06:13.330 end heaps---------- 00:06:13.330 8 mempools totaling size 598.116089 MiB 00:06:13.330 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:13.330 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:13.330 size: 84.521057 MiB name: bdev_io_3337172 00:06:13.330 size: 51.011292 MiB name: evtpool_3337172 00:06:13.330 
size: 50.003479 MiB name: msgpool_3337172 00:06:13.330 size: 21.763794 MiB name: PDU_Pool 00:06:13.330 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:13.330 size: 0.026123 MiB name: Session_Pool 00:06:13.330 end mempools------- 00:06:13.330 6 memzones totaling size 4.142822 MiB 00:06:13.330 size: 1.000366 MiB name: RG_ring_0_3337172 00:06:13.330 size: 1.000366 MiB name: RG_ring_1_3337172 00:06:13.330 size: 1.000366 MiB name: RG_ring_4_3337172 00:06:13.330 size: 1.000366 MiB name: RG_ring_5_3337172 00:06:13.330 size: 0.125366 MiB name: RG_ring_2_3337172 00:06:13.330 size: 0.015991 MiB name: RG_ring_3_3337172 00:06:13.330 end memzones------- 00:06:13.330 11:17:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:13.330 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:13.330 list of free elements. size: 12.519348 MiB 00:06:13.330 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:13.330 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:13.330 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:13.330 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:13.330 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:13.330 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:13.330 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:13.330 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:13.330 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:13.331 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:13.331 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:13.331 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:13.331 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:13.331 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:13.331 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:13.331 list of standard malloc elements. 
size: 199.218079 MiB 00:06:13.331 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:13.331 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:13.331 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:13.331 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:13.331 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:13.331 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:13.331 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:13.331 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:13.331 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:13.331 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:13.331 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:13.331 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:13.331 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:13.331 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:13.331 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:13.331 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:13.331 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:13.331 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:13.331 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:13.331 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:13.331 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:13.331 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:13.331 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:13.331 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:13.331 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:13.331 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:13.331 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:13.331 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:13.331 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:13.331 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:13.331 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:13.331 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:13.331 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:13.331 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:13.331 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:13.331 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:13.331 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:13.331 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:13.331 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:13.331 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:13.331 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:13.331 list of memzone associated elements. 
size: 602.262573 MiB 00:06:13.331 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:13.331 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:13.331 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:13.331 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:13.331 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:13.331 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3337172_0 00:06:13.331 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:13.331 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3337172_0 00:06:13.331 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:13.331 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3337172_0 00:06:13.331 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:13.331 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:13.331 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:13.331 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:13.331 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:13.331 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3337172 00:06:13.331 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:13.331 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3337172 00:06:13.331 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:13.331 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3337172 00:06:13.331 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:13.331 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:13.331 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:13.331 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:13.331 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:13.331 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:13.331 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:13.331 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:13.331 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:13.331 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3337172 00:06:13.331 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:13.331 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3337172 00:06:13.331 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:13.331 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3337172 00:06:13.331 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:13.331 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3337172 00:06:13.331 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:13.331 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3337172 00:06:13.331 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:13.331 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:13.331 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:13.331 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:13.331 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:13.331 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:13.331 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:13.331 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3337172 00:06:13.331 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:13.331 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:13.331 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:13.331 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:13.331 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:13.331 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3337172 00:06:13.331 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:13.331 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:13.331 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:13.331 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3337172 00:06:13.331 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:13.331 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3337172 00:06:13.331 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:13.331 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:13.331 11:17:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:13.331 11:17:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3337172 00:06:13.331 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 3337172 ']' 00:06:13.331 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 3337172 00:06:13.331 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:13.331 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:13.331 11:17:41 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3337172 00:06:13.592 11:17:42 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:13.592 11:17:42 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:13.592 11:17:42 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3337172' 00:06:13.592 killing process with pid 3337172 00:06:13.592 11:17:42 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 3337172 00:06:13.592 11:17:42 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 3337172 00:06:13.592 00:06:13.592 real 0m1.293s 00:06:13.592 user 0m1.365s 00:06:13.592 sys 0m0.378s 00:06:13.592 11:17:42 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.592 11:17:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:13.592 ************************************ 00:06:13.592 END TEST dpdk_mem_utility 00:06:13.592 ************************************ 00:06:13.592 11:17:42 -- common/autotest_common.sh@1142 -- # return 0 00:06:13.592 11:17:42 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:13.853 11:17:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.853 11:17:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.853 11:17:42 -- common/autotest_common.sh@10 -- # set +x 00:06:13.853 ************************************ 00:06:13.853 START TEST event 00:06:13.853 ************************************ 00:06:13.853 11:17:42 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:13.853 * Looking for test storage... 
00:06:13.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:13.853 11:17:42 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:13.853 11:17:42 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:13.853 11:17:42 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:13.853 11:17:42 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:13.853 11:17:42 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.853 11:17:42 event -- common/autotest_common.sh@10 -- # set +x 00:06:13.853 ************************************ 00:06:13.853 START TEST event_perf 00:06:13.853 ************************************ 00:06:13.853 11:17:42 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:13.853 Running I/O for 1 seconds...[2024-07-15 11:17:42.489251] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:06:13.853 [2024-07-15 11:17:42.489364] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3337472 ] 00:06:13.853 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.133 [2024-07-15 11:17:42.560103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:14.133 [2024-07-15 11:17:42.637928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.133 [2024-07-15 11:17:42.638044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.133 [2024-07-15 11:17:42.638188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.133 [2024-07-15 11:17:42.638395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.074 Running I/O for 1 seconds... 00:06:15.074 lcore 0: 178980 00:06:15.074 lcore 1: 178979 00:06:15.074 lcore 2: 178976 00:06:15.074 lcore 3: 178979 00:06:15.074 done. 00:06:15.074 00:06:15.074 real 0m1.225s 00:06:15.074 user 0m4.138s 00:06:15.074 sys 0m0.084s 00:06:15.074 11:17:43 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.074 11:17:43 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:15.074 ************************************ 00:06:15.074 END TEST event_perf 00:06:15.074 ************************************ 00:06:15.074 11:17:43 event -- common/autotest_common.sh@1142 -- # return 0 00:06:15.074 11:17:43 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:15.074 11:17:43 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:15.074 11:17:43 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.074 11:17:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.074 ************************************ 00:06:15.074 START TEST event_reactor 00:06:15.074 ************************************ 00:06:15.074 11:17:43 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:15.335 [2024-07-15 11:17:43.789508] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:06:15.335 [2024-07-15 11:17:43.789624] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3337831 ] 00:06:15.335 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.335 [2024-07-15 11:17:43.858398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.335 [2024-07-15 11:17:43.920848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.276 test_start 00:06:16.276 oneshot 00:06:16.276 tick 100 00:06:16.276 tick 100 00:06:16.276 tick 250 00:06:16.276 tick 100 00:06:16.276 tick 100 00:06:16.276 tick 100 00:06:16.276 tick 250 00:06:16.276 tick 500 00:06:16.276 tick 100 00:06:16.276 tick 100 00:06:16.276 tick 250 00:06:16.276 tick 100 00:06:16.276 tick 100 00:06:16.276 test_end 00:06:16.276 00:06:16.276 real 0m1.208s 00:06:16.276 user 0m1.128s 00:06:16.276 sys 0m0.075s 00:06:16.276 11:17:44 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.276 11:17:44 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:16.276 ************************************ 00:06:16.276 END TEST event_reactor 00:06:16.276 ************************************ 00:06:16.537 11:17:45 event -- common/autotest_common.sh@1142 -- # return 0 00:06:16.537 11:17:45 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:16.537 11:17:45 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:16.537 11:17:45 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.537 11:17:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.537 ************************************ 00:06:16.537 START TEST event_reactor_perf 00:06:16.537 ************************************ 00:06:16.537 11:17:45 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:16.537 [2024-07-15 11:17:45.072613] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:06:16.537 [2024-07-15 11:17:45.072705] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3338181 ] 00:06:16.537 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.537 [2024-07-15 11:17:45.135504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.537 [2024-07-15 11:17:45.200097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.921 test_start 00:06:17.921 test_end 00:06:17.921 Performance: 368598 events per second 00:06:17.921 00:06:17.921 real 0m1.201s 00:06:17.921 user 0m1.132s 00:06:17.921 sys 0m0.065s 00:06:17.921 11:17:46 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.921 11:17:46 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:17.921 ************************************ 00:06:17.921 END TEST event_reactor_perf 00:06:17.921 ************************************ 00:06:17.921 11:17:46 event -- common/autotest_common.sh@1142 -- # return 0 00:06:17.921 11:17:46 event -- event/event.sh@49 -- # uname -s 00:06:17.921 11:17:46 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:17.921 11:17:46 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:17.921 11:17:46 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:17.922 11:17:46 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.922 11:17:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.922 ************************************ 00:06:17.922 START TEST event_scheduler 00:06:17.922 ************************************ 00:06:17.922 11:17:46 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:17.922 * Looking for test storage... 00:06:17.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:17.922 11:17:46 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:17.922 11:17:46 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:17.922 11:17:46 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3338500 00:06:17.922 11:17:46 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:17.922 11:17:46 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3338500 00:06:17.922 11:17:46 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 3338500 ']' 00:06:17.922 11:17:46 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.922 11:17:46 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.922 11:17:46 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:17.922 11:17:46 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.922 11:17:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:17.922 [2024-07-15 11:17:46.462815] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:06:17.922 [2024-07-15 11:17:46.462863] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3338500 ] 00:06:17.922 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.922 [2024-07-15 11:17:46.508810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.922 [2024-07-15 11:17:46.568136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.922 [2024-07-15 11:17:46.568245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.922 [2024-07-15 11:17:46.568377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.922 [2024-07-15 11:17:46.568378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.922 11:17:46 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.922 11:17:46 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:17.922 11:17:46 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:17.922 11:17:46 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.922 11:17:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:17.922 [2024-07-15 11:17:46.604780] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:17.922 [2024-07-15 11:17:46.604791] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:17.922 [2024-07-15 11:17:46.604798] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:17.922 [2024-07-15 11:17:46.604802] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:17.922 [2024-07-15 11:17:46.604806] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:17.922 11:17:46 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.922 11:17:46 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:17.922 11:17:46 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.922 11:17:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:18.183 [2024-07-15 11:17:46.659243] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
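A minimal sketch of the RPC sequence the scheduler_create_thread subtest below drives, assuming the harness's rpc_cmd wrapper resolves to scripts/rpc.py with the test's scheduler_plugin importable (e.g. on PYTHONPATH); the SPDK variable is shorthand introduced here, while the paths, flags, and RPC names appear verbatim in this log:
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk            # repo root used throughout this run
$SPDK/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &   # start the test app paused on --wait-for-rpc, as above
# (the test waits for the RPC socket to come up before issuing the calls below)
$SPDK/scripts/rpc.py framework_set_scheduler dynamic              # select the dynamic scheduler; the dpdk governor may fail to init, as logged
$SPDK/scripts/rpc.py framework_start_init                         # finish start-up so the reactors begin running
# threads are then created and manipulated through the test's scheduler plugin:
$SPDK/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
$SPDK/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50   # 11/12 are thread ids returned by the create calls
$SPDK/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12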
00:06:18.183 11:17:46 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.183 11:17:46 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:18.183 11:17:46 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.183 11:17:46 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.183 11:17:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:18.183 ************************************ 00:06:18.183 START TEST scheduler_create_thread 00:06:18.183 ************************************ 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.183 2 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.183 3 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.183 4 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.183 5 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.183 6 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.183 7 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.183 8 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.183 9 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.183 11:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.755 10 00:06:18.755 11:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.755 11:17:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:18.755 11:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.755 11:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.137 11:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.137 11:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:20.137 11:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:20.137 11:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.137 11:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.745 11:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.746 11:17:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:20.746 11:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.746 11:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.686 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.686 11:17:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:21.686 11:17:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:21.686 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.686 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.255 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.255 00:06:22.255 real 0m4.222s 00:06:22.255 user 0m0.020s 00:06:22.255 sys 0m0.011s 00:06:22.255 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.255 11:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.255 ************************************ 00:06:22.255 END TEST scheduler_create_thread 00:06:22.255 ************************************ 00:06:22.255 11:17:50 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:22.255 11:17:50 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:22.255 11:17:50 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3338500 00:06:22.255 11:17:50 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 3338500 ']' 00:06:22.255 11:17:50 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 3338500 00:06:22.514 11:17:50 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:22.514 11:17:50 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:22.514 11:17:50 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3338500 00:06:22.514 11:17:51 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:22.514 11:17:51 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:22.514 11:17:51 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3338500' 00:06:22.514 killing process with pid 3338500 00:06:22.514 11:17:51 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 3338500 00:06:22.514 11:17:51 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 3338500 00:06:22.514 [2024-07-15 11:17:51.196450] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:22.774 00:06:22.774 real 0m5.033s 00:06:22.774 user 0m10.059s 00:06:22.774 sys 0m0.319s 00:06:22.774 11:17:51 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.774 11:17:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:22.774 ************************************ 00:06:22.774 END TEST event_scheduler 00:06:22.774 ************************************ 00:06:22.774 11:17:51 event -- common/autotest_common.sh@1142 -- # return 0 00:06:22.774 11:17:51 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:22.774 11:17:51 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:22.774 11:17:51 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.774 11:17:51 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.774 11:17:51 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.774 ************************************ 00:06:22.774 START TEST app_repeat 00:06:22.774 ************************************ 00:06:22.774 11:17:51 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:22.774 11:17:51 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.774 11:17:51 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.774 11:17:51 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:22.774 11:17:51 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.774 11:17:51 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:22.774 11:17:51 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:22.774 11:17:51 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:22.774 11:17:51 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:22.774 11:17:51 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3339533 00:06:22.774 11:17:51 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:22.774 11:17:51 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3339533' 00:06:22.774 Process app_repeat pid: 3339533 00:06:22.774 11:17:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:22.774 11:17:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:22.775 spdk_app_start Round 0 00:06:22.775 11:17:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3339533 /var/tmp/spdk-nbd.sock 00:06:22.775 11:17:51 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3339533 ']' 00:06:22.775 11:17:51 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:22.775 11:17:51 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.775 11:17:51 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:22.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:22.775 11:17:51 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.775 11:17:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:22.775 [2024-07-15 11:17:51.475133] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:06:22.775 [2024-07-15 11:17:51.475180] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3339533 ] 00:06:23.035 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.035 [2024-07-15 11:17:51.532780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.035 [2024-07-15 11:17:51.602135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.035 [2024-07-15 11:17:51.602155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.035 11:17:51 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.035 11:17:51 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:23.035 11:17:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.295 Malloc0 00:06:23.295 11:17:51 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.555 Malloc1 00:06:23.556 11:17:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.556 11:17:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.556 11:17:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.556 11:17:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:23.556 11:17:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.556 11:17:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:23.556 11:17:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.556 11:17:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.556 11:17:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.556 11:17:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:23.556 11:17:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.556 11:17:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:23.556 11:17:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:23.556 11:17:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:23.556 11:17:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.556 11:17:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:23.556 /dev/nbd0 00:06:23.556 11:17:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:23.556 11:17:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:23.556 11:17:52 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:23.556 11:17:52 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:23.556 11:17:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:23.556 11:17:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:23.556 11:17:52 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:23.556 11:17:52 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:23.556 11:17:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:23.556 11:17:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:23.556 11:17:52 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.556 1+0 records in 00:06:23.556 1+0 records out 00:06:23.556 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212177 s, 19.3 MB/s 00:06:23.556 11:17:52 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:23.556 11:17:52 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:23.556 11:17:52 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:23.556 11:17:52 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:23.556 11:17:52 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:23.556 11:17:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.556 11:17:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.556 11:17:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:23.816 /dev/nbd1 00:06:23.816 11:17:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:23.816 11:17:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:23.816 11:17:52 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:23.816 11:17:52 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:23.816 11:17:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:23.816 11:17:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:23.816 11:17:52 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:23.816 11:17:52 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:23.816 11:17:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:23.816 11:17:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:23.816 11:17:52 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.816 1+0 records in 00:06:23.816 1+0 records out 00:06:23.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000142719 s, 28.7 MB/s 00:06:23.816 11:17:52 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:23.816 11:17:52 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:23.816 11:17:52 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:23.816 11:17:52 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:23.816 11:17:52 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:23.816 11:17:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.816 11:17:52 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.816 11:17:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.816 11:17:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.816 11:17:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.076 11:17:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:24.076 { 00:06:24.076 "nbd_device": "/dev/nbd0", 00:06:24.076 "bdev_name": "Malloc0" 00:06:24.076 }, 00:06:24.076 { 00:06:24.076 "nbd_device": "/dev/nbd1", 00:06:24.076 "bdev_name": "Malloc1" 00:06:24.077 } 00:06:24.077 ]' 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:24.077 { 00:06:24.077 "nbd_device": "/dev/nbd0", 00:06:24.077 "bdev_name": "Malloc0" 00:06:24.077 }, 00:06:24.077 { 00:06:24.077 "nbd_device": "/dev/nbd1", 00:06:24.077 "bdev_name": "Malloc1" 00:06:24.077 } 00:06:24.077 ]' 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:24.077 /dev/nbd1' 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:24.077 /dev/nbd1' 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:24.077 256+0 records in 00:06:24.077 256+0 records out 00:06:24.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115728 s, 90.6 MB/s 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:24.077 256+0 records in 00:06:24.077 256+0 records out 00:06:24.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157355 s, 66.6 MB/s 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:24.077 256+0 records in 00:06:24.077 256+0 records out 00:06:24.077 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0174862 s, 60.0 MB/s 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.077 11:17:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:24.338 11:17:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:24.338 11:17:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:24.338 11:17:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:24.338 11:17:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.338 11:17:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.338 11:17:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:24.338 11:17:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.338 11:17:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.338 11:17:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.338 11:17:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:24.338 11:17:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:24.598 11:17:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:24.598 11:17:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:24.598 11:17:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.598 11:17:53 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.598 11:17:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:24.598 11:17:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.598 11:17:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.598 11:17:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.598 11:17:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.598 11:17:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.598 11:17:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:24.598 11:17:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:24.598 11:17:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.598 11:17:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:24.598 11:17:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:24.598 11:17:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.598 11:17:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:24.598 11:17:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:24.598 11:17:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:24.598 11:17:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:24.598 11:17:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:24.598 11:17:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:24.598 11:17:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:24.858 11:17:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:24.858 [2024-07-15 11:17:53.553297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.118 [2024-07-15 11:17:53.617716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.118 [2024-07-15 11:17:53.617719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.118 [2024-07-15 11:17:53.649172] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:25.118 [2024-07-15 11:17:53.649204] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:28.419 11:17:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:28.419 11:17:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:28.419 spdk_app_start Round 1 00:06:28.419 11:17:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3339533 /var/tmp/spdk-nbd.sock 00:06:28.419 11:17:56 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3339533 ']' 00:06:28.419 11:17:56 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.419 11:17:56 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.419 11:17:56 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:28.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
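Each app_repeat round traced here repeats the same nbd_rpc_data_verify flow: create two small malloc bdevs (64 MB, 4 KiB blocks) over RPC, export them through the kernel NBD driver, write a random 1 MiB pattern through /dev/nbd0 and /dev/nbd1, compare it back, and tear the exports down again. A condensed sketch of one round, assuming the app_repeat binary is already listening on /var/tmp/spdk-nbd.sock, the nbd kernel module is loaded, and using /tmp/nbdrandtest as a stand-in for the workspace scratch file used above:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

$RPC bdev_malloc_create 64 4096            # prints the new bdev name, e.g. Malloc0
$RPC bdev_malloc_create 64 4096            # Malloc1
$RPC nbd_start_disk Malloc0 /dev/nbd0      # expose the bdevs as kernel block devices
$RPC nbd_start_disk Malloc1 /dev/nbd1

dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct   # write the pattern
    cmp -b -n 1M /tmp/nbdrandtest "$nbd"                              # read back and verify
done
rm /tmp/nbdrandtest

$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1
$RPC spdk_kill_instance SIGTERM            # ends the round; app_repeat restarts the app for the next one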
00:06:28.419 11:17:56 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.419 11:17:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:28.419 11:17:56 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.419 11:17:56 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:28.419 11:17:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.419 Malloc0 00:06:28.419 11:17:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.419 Malloc1 00:06:28.419 11:17:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.419 11:17:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.419 11:17:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.419 11:17:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:28.419 11:17:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.419 11:17:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:28.419 11:17:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.419 11:17:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.419 11:17:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.419 11:17:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:28.419 11:17:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.419 11:17:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:28.419 11:17:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:28.419 11:17:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:28.419 11:17:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.419 11:17:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:28.419 /dev/nbd0 00:06:28.419 11:17:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:28.419 11:17:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:28.419 11:17:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:28.419 11:17:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:28.419 11:17:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:28.419 11:17:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:28.419 11:17:57 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:28.419 11:17:57 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:28.419 11:17:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:28.419 11:17:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:28.419 11:17:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:28.419 1+0 records in 00:06:28.419 1+0 records out 00:06:28.419 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274918 s, 14.9 MB/s 00:06:28.419 11:17:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:28.420 11:17:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:28.420 11:17:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:28.420 11:17:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:28.420 11:17:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:28.420 11:17:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.420 11:17:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.420 11:17:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:28.680 /dev/nbd1 00:06:28.680 11:17:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:28.680 11:17:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:28.680 11:17:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:28.680 11:17:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:28.680 11:17:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:28.680 11:17:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:28.680 11:17:57 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:28.680 11:17:57 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:28.681 11:17:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:28.681 11:17:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:28.681 11:17:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.681 1+0 records in 00:06:28.681 1+0 records out 00:06:28.681 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269327 s, 15.2 MB/s 00:06:28.681 11:17:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:28.681 11:17:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:28.681 11:17:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:28.681 11:17:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:28.681 11:17:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:28.681 11:17:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.681 11:17:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.681 11:17:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.681 11:17:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.681 11:17:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:28.941 { 00:06:28.941 "nbd_device": "/dev/nbd0", 00:06:28.941 "bdev_name": "Malloc0" 00:06:28.941 }, 00:06:28.941 { 00:06:28.941 "nbd_device": "/dev/nbd1", 00:06:28.941 "bdev_name": "Malloc1" 00:06:28.941 } 00:06:28.941 ]' 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:28.941 { 00:06:28.941 "nbd_device": "/dev/nbd0", 00:06:28.941 "bdev_name": "Malloc0" 00:06:28.941 }, 00:06:28.941 { 00:06:28.941 "nbd_device": "/dev/nbd1", 00:06:28.941 "bdev_name": "Malloc1" 00:06:28.941 } 00:06:28.941 ]' 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:28.941 /dev/nbd1' 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:28.941 /dev/nbd1' 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:28.941 256+0 records in 00:06:28.941 256+0 records out 00:06:28.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124771 s, 84.0 MB/s 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:28.941 256+0 records in 00:06:28.941 256+0 records out 00:06:28.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156869 s, 66.8 MB/s 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:28.941 256+0 records in 00:06:28.941 256+0 records out 00:06:28.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0172205 s, 60.9 MB/s 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.941 11:17:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:29.202 11:17:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:29.202 11:17:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:29.202 11:17:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:29.202 11:17:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.202 11:17:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.202 11:17:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:29.202 11:17:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:29.202 11:17:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.202 11:17:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.202 11:17:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:29.202 11:17:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:29.463 11:17:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:29.463 11:17:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:29.463 11:17:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.463 11:17:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.463 11:17:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:29.463 11:17:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:29.463 11:17:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.463 11:17:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.463 11:17:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.463 11:17:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.463 11:17:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:29.463 11:17:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:29.463 11:17:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.463 11:17:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:29.463 11:17:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:29.463 11:17:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.463 11:17:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:29.463 11:17:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:29.463 11:17:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:29.463 11:17:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:29.463 11:17:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:29.463 11:17:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:29.463 11:17:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:29.723 11:17:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:29.723 [2024-07-15 11:17:58.413788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.984 [2024-07-15 11:17:58.476706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.984 [2024-07-15 11:17:58.476710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.984 [2024-07-15 11:17:58.508895] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:29.984 [2024-07-15 11:17:58.508929] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:33.283 11:18:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:33.283 11:18:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:33.283 spdk_app_start Round 2 00:06:33.283 11:18:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3339533 /var/tmp/spdk-nbd.sock 00:06:33.283 11:18:01 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3339533 ']' 00:06:33.283 11:18:01 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:33.283 11:18:01 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.283 11:18:01 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:33.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
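Before every dd against /dev/nbd0 or /dev/nbd1 in these traces, the waitfornbd helper from autotest_common.sh polls until the NBD device is genuinely usable: first it waits for the name to appear in /proc/partitions, then it confirms that a 4 KiB O_DIRECT read actually returns data. A rough reconstruction of that helper from the xtrace above (the retry limit of 20 matches the trace; the sleep interval and the /tmp scratch path are assumptions):

waitfornbd() {
    local nbd_name=$1 i

    # wait for the kernel to register the device in /proc/partitions
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done

    # wait until a direct read of the first 4 KiB block succeeds and is non-empty
    for ((i = 1; i <= 20; i++)); do
        if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null &&
           [ "$(stat -c %s /tmp/nbdtest)" != 0 ]; then
            rm -f /tmp/nbdtest
            return 0
        fi
        sleep 0.1
    done
    return 1
}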
00:06:33.283 11:18:01 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.283 11:18:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:33.283 11:18:01 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.283 11:18:01 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:33.283 11:18:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.283 Malloc0 00:06:33.283 11:18:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.283 Malloc1 00:06:33.283 11:18:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.283 11:18:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.283 11:18:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.283 11:18:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:33.283 11:18:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.283 11:18:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:33.283 11:18:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.283 11:18:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.283 11:18:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.283 11:18:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:33.283 11:18:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.283 11:18:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:33.283 11:18:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:33.283 11:18:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:33.283 11:18:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.283 11:18:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:33.283 /dev/nbd0 00:06:33.283 11:18:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:33.283 11:18:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:33.283 11:18:01 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:33.283 11:18:01 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:33.283 11:18:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:33.283 11:18:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:33.283 11:18:01 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:33.283 11:18:01 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:33.283 11:18:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:33.283 11:18:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:33.283 11:18:01 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:33.283 1+0 records in 00:06:33.283 1+0 records out 00:06:33.283 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216069 s, 19.0 MB/s 00:06:33.283 11:18:01 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:33.283 11:18:01 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:33.283 11:18:01 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:33.283 11:18:01 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:33.283 11:18:01 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:33.283 11:18:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.283 11:18:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.283 11:18:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:33.545 /dev/nbd1 00:06:33.545 11:18:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:33.545 11:18:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:33.545 11:18:02 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:33.545 11:18:02 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:33.545 11:18:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:33.545 11:18:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:33.545 11:18:02 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:33.545 11:18:02 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:33.545 11:18:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:33.545 11:18:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:33.545 11:18:02 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:33.545 1+0 records in 00:06:33.545 1+0 records out 00:06:33.545 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301382 s, 13.6 MB/s 00:06:33.545 11:18:02 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:33.545 11:18:02 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:33.545 11:18:02 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:33.545 11:18:02 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:33.545 11:18:02 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:33.545 11:18:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.545 11:18:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.545 11:18:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.545 11:18:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.545 11:18:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:33.806 { 00:06:33.806 "nbd_device": "/dev/nbd0", 00:06:33.806 "bdev_name": "Malloc0" 00:06:33.806 }, 00:06:33.806 { 00:06:33.806 "nbd_device": "/dev/nbd1", 00:06:33.806 "bdev_name": "Malloc1" 00:06:33.806 } 00:06:33.806 ]' 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:33.806 { 00:06:33.806 "nbd_device": "/dev/nbd0", 00:06:33.806 "bdev_name": "Malloc0" 00:06:33.806 }, 00:06:33.806 { 00:06:33.806 "nbd_device": "/dev/nbd1", 00:06:33.806 "bdev_name": "Malloc1" 00:06:33.806 } 00:06:33.806 ]' 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:33.806 /dev/nbd1' 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:33.806 /dev/nbd1' 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:33.806 256+0 records in 00:06:33.806 256+0 records out 00:06:33.806 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125018 s, 83.9 MB/s 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:33.806 256+0 records in 00:06:33.806 256+0 records out 00:06:33.806 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015607 s, 67.2 MB/s 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:33.806 256+0 records in 00:06:33.806 256+0 records out 00:06:33.806 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177061 s, 59.2 MB/s 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.806 11:18:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:34.066 11:18:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:34.066 11:18:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:34.066 11:18:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:34.066 11:18:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.066 11:18:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.066 11:18:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:34.066 11:18:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.066 11:18:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.066 11:18:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.066 11:18:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:34.326 11:18:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:34.326 11:18:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:34.326 11:18:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:34.326 11:18:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.326 11:18:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.326 11:18:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:34.326 11:18:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.326 11:18:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.326 11:18:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.326 11:18:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.326 11:18:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.326 11:18:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:34.326 11:18:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:34.326 11:18:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.326 11:18:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:34.326 11:18:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:34.326 11:18:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.326 11:18:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:34.326 11:18:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:34.326 11:18:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:34.326 11:18:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:34.326 11:18:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:34.326 11:18:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:34.326 11:18:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:34.587 11:18:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:34.847 [2024-07-15 11:18:03.309603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.847 [2024-07-15 11:18:03.372929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.847 [2024-07-15 11:18:03.372932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.847 [2024-07-15 11:18:03.404374] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:34.847 [2024-07-15 11:18:03.404408] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:38.145 11:18:06 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3339533 /var/tmp/spdk-nbd.sock 00:06:38.145 11:18:06 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3339533 ']' 00:06:38.145 11:18:06 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:38.145 11:18:06 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.145 11:18:06 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:38.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
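At the end of each round the trace verifies that no NBD exports are left behind: nbd_get_disks is queried over the same socket, the device names are extracted with jq, and the resulting count is expected to be zero. A small sketch of that check (the trailing || true mirrors the trace, where grep -c exits non-zero on an empty list but still prints 0):

RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'

nbd_disks_json=$($RPC nbd_get_disks)                                   # '[]' once both disks are stopped
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')   # one device path per line, or empty
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
[ "$count" -eq 0 ] && echo 'all NBD devices detached'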
00:06:38.145 11:18:06 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.145 11:18:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:38.145 11:18:06 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.145 11:18:06 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:38.145 11:18:06 event.app_repeat -- event/event.sh@39 -- # killprocess 3339533 00:06:38.145 11:18:06 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 3339533 ']' 00:06:38.145 11:18:06 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 3339533 00:06:38.145 11:18:06 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:38.145 11:18:06 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:38.145 11:18:06 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3339533 00:06:38.145 11:18:06 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:38.145 11:18:06 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:38.145 11:18:06 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3339533' 00:06:38.145 killing process with pid 3339533 00:06:38.145 11:18:06 event.app_repeat -- common/autotest_common.sh@967 -- # kill 3339533 00:06:38.145 11:18:06 event.app_repeat -- common/autotest_common.sh@972 -- # wait 3339533 00:06:38.145 spdk_app_start is called in Round 0. 00:06:38.145 Shutdown signal received, stop current app iteration 00:06:38.145 Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 reinitialization... 00:06:38.145 spdk_app_start is called in Round 1. 00:06:38.145 Shutdown signal received, stop current app iteration 00:06:38.145 Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 reinitialization... 00:06:38.145 spdk_app_start is called in Round 2. 00:06:38.145 Shutdown signal received, stop current app iteration 00:06:38.145 Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 reinitialization... 00:06:38.145 spdk_app_start is called in Round 3. 
00:06:38.145 Shutdown signal received, stop current app iteration 00:06:38.145 11:18:06 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:38.145 11:18:06 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:38.145 00:06:38.145 real 0m15.054s 00:06:38.145 user 0m32.519s 00:06:38.145 sys 0m2.062s 00:06:38.145 11:18:06 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.145 11:18:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:38.145 ************************************ 00:06:38.145 END TEST app_repeat 00:06:38.145 ************************************ 00:06:38.145 11:18:06 event -- common/autotest_common.sh@1142 -- # return 0 00:06:38.145 11:18:06 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:38.145 11:18:06 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:38.145 11:18:06 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.145 11:18:06 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.145 11:18:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.145 ************************************ 00:06:38.145 START TEST cpu_locks 00:06:38.145 ************************************ 00:06:38.145 11:18:06 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:38.145 * Looking for test storage... 00:06:38.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:38.145 11:18:06 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:38.145 11:18:06 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:38.145 11:18:06 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:38.145 11:18:06 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:38.145 11:18:06 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.145 11:18:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.145 11:18:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.145 ************************************ 00:06:38.145 START TEST default_locks 00:06:38.145 ************************************ 00:06:38.145 11:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:38.145 11:18:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3342931 00:06:38.145 11:18:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3342931 00:06:38.145 11:18:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.145 11:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3342931 ']' 00:06:38.145 11:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.145 11:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.145 11:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:38.145 11:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.145 11:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.145 [2024-07-15 11:18:06.788061] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:06:38.145 [2024-07-15 11:18:06.788133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3342931 ] 00:06:38.146 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.410 [2024-07-15 11:18:06.851210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.410 [2024-07-15 11:18:06.925708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.047 11:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.047 11:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:39.047 11:18:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3342931 00:06:39.047 11:18:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3342931 00:06:39.047 11:18:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.047 lslocks: write error 00:06:39.047 11:18:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3342931 00:06:39.047 11:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 3342931 ']' 00:06:39.047 11:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 3342931 00:06:39.047 11:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:39.047 11:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:39.047 11:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3342931 00:06:39.308 11:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:39.308 11:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:39.308 11:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3342931' 00:06:39.308 killing process with pid 3342931 00:06:39.308 11:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 3342931 00:06:39.308 11:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 3342931 00:06:39.308 11:18:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3342931 00:06:39.308 11:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:39.308 11:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3342931 00:06:39.308 11:18:08 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:39.308 11:18:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.308 11:18:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:39.308 11:18:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.308 11:18:08 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 3342931 00:06:39.308 11:18:08 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3342931 ']' 00:06:39.308 11:18:08 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.308 11:18:08 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.308 11:18:08 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.308 11:18:08 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.308 11:18:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3342931) - No such process 00:06:39.308 ERROR: process (pid: 3342931) is no longer running 00:06:39.308 11:18:08 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.308 11:18:08 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:39.308 11:18:08 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:39.308 11:18:08 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:39.308 11:18:08 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:39.308 11:18:08 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:39.308 11:18:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:39.308 11:18:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:39.308 11:18:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:39.308 11:18:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:39.308 00:06:39.308 real 0m1.278s 00:06:39.308 user 0m1.373s 00:06:39.308 sys 0m0.392s 00:06:39.308 11:18:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.308 11:18:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.308 ************************************ 00:06:39.308 END TEST default_locks 00:06:39.308 ************************************ 00:06:39.568 11:18:08 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:39.568 11:18:08 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:39.568 11:18:08 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:39.568 11:18:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.568 11:18:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.568 ************************************ 00:06:39.568 START TEST default_locks_via_rpc 00:06:39.568 ************************************ 00:06:39.568 11:18:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:39.568 11:18:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3343116 00:06:39.568 11:18:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3343116 00:06:39.568 11:18:08 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:39.568 11:18:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3343116 ']' 00:06:39.568 11:18:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.568 11:18:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.568 11:18:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.568 11:18:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.568 11:18:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.568 [2024-07-15 11:18:08.141349] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:06:39.568 [2024-07-15 11:18:08.141404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3343116 ] 00:06:39.568 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.568 [2024-07-15 11:18:08.203021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.827 [2024-07-15 11:18:08.275793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.397 11:18:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.397 11:18:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:40.397 11:18:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:40.397 11:18:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.397 11:18:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.397 11:18:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.397 11:18:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:40.397 11:18:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:40.397 11:18:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:40.397 11:18:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:40.397 11:18:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:40.397 11:18:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.397 11:18:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.397 11:18:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.397 11:18:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3343116 00:06:40.397 11:18:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3343116 00:06:40.397 11:18:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
00:06:40.658 11:18:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3343116 00:06:40.658 11:18:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 3343116 ']' 00:06:40.658 11:18:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 3343116 00:06:40.658 11:18:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:40.658 11:18:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:40.658 11:18:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3343116 00:06:40.918 11:18:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:40.918 11:18:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:40.918 11:18:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3343116' 00:06:40.918 killing process with pid 3343116 00:06:40.918 11:18:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 3343116 00:06:40.918 11:18:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 3343116 00:06:40.918 00:06:40.918 real 0m1.499s 00:06:40.918 user 0m1.585s 00:06:40.918 sys 0m0.483s 00:06:40.918 11:18:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.918 11:18:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.918 ************************************ 00:06:40.918 END TEST default_locks_via_rpc 00:06:40.918 ************************************ 00:06:41.179 11:18:09 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:41.179 11:18:09 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:41.179 11:18:09 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:41.179 11:18:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.179 11:18:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.179 ************************************ 00:06:41.179 START TEST non_locking_app_on_locked_coremask 00:06:41.179 ************************************ 00:06:41.179 11:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:41.179 11:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3343648 00:06:41.179 11:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3343648 /var/tmp/spdk.sock 00:06:41.179 11:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:41.179 11:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3343648 ']' 00:06:41.179 11:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.179 11:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.179 11:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.179 11:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.179 11:18:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.179 [2024-07-15 11:18:09.716321] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:06:41.179 [2024-07-15 11:18:09.716377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3343648 ] 00:06:41.179 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.179 [2024-07-15 11:18:09.778197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.179 [2024-07-15 11:18:09.849488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.120 11:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.120 11:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:42.120 11:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3344061 00:06:42.120 11:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3344061 /var/tmp/spdk2.sock 00:06:42.120 11:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:42.120 11:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3344061 ']' 00:06:42.120 11:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.120 11:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.120 11:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.120 11:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.120 11:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.120 [2024-07-15 11:18:10.534423] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:06:42.120 [2024-07-15 11:18:10.534476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3344061 ] 00:06:42.120 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.120 [2024-07-15 11:18:10.621193] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:42.120 [2024-07-15 11:18:10.621220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.120 [2024-07-15 11:18:10.750284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.690 11:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.690 11:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:42.690 11:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3343648 00:06:42.690 11:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3343648 00:06:42.690 11:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.261 lslocks: write error 00:06:43.261 11:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3343648 00:06:43.261 11:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3343648 ']' 00:06:43.261 11:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3343648 00:06:43.261 11:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:43.262 11:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.262 11:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3343648 00:06:43.262 11:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:43.262 11:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:43.262 11:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3343648' 00:06:43.262 killing process with pid 3343648 00:06:43.262 11:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3343648 00:06:43.262 11:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3343648 00:06:43.522 11:18:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3344061 00:06:43.522 11:18:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3344061 ']' 00:06:43.522 11:18:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3344061 00:06:43.522 11:18:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:43.522 11:18:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.522 11:18:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3344061 00:06:43.782 11:18:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:43.782 11:18:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:43.782 11:18:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3344061' 00:06:43.782 
killing process with pid 3344061 00:06:43.782 11:18:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3344061 00:06:43.782 11:18:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3344061 00:06:43.782 00:06:43.782 real 0m2.785s 00:06:43.782 user 0m3.046s 00:06:43.782 sys 0m0.815s 00:06:43.782 11:18:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.782 11:18:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.782 ************************************ 00:06:43.782 END TEST non_locking_app_on_locked_coremask 00:06:43.782 ************************************ 00:06:44.041 11:18:12 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:44.041 11:18:12 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:44.041 11:18:12 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.041 11:18:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.041 11:18:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.041 ************************************ 00:06:44.041 START TEST locking_app_on_unlocked_coremask 00:06:44.041 ************************************ 00:06:44.041 11:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:44.041 11:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3344552 00:06:44.041 11:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3344552 /var/tmp/spdk.sock 00:06:44.041 11:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:44.041 11:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3344552 ']' 00:06:44.041 11:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.041 11:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.041 11:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.041 11:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.041 11:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.041 [2024-07-15 11:18:12.576500] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:06:44.041 [2024-07-15 11:18:12.576553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3344552 ] 00:06:44.041 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.041 [2024-07-15 11:18:12.637093] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:44.041 [2024-07-15 11:18:12.637130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.041 [2024-07-15 11:18:12.707820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.978 11:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.978 11:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:44.978 11:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:44.978 11:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3344737 00:06:44.978 11:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3344737 /var/tmp/spdk2.sock 00:06:44.978 11:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3344737 ']' 00:06:44.978 11:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.978 11:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.978 11:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.978 11:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.978 11:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.978 [2024-07-15 11:18:13.372328] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:06:44.978 [2024-07-15 11:18:13.372378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3344737 ] 00:06:44.978 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.978 [2024-07-15 11:18:13.460324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.978 [2024-07-15 11:18:13.589769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.548 11:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.548 11:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:45.548 11:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3344737 00:06:45.548 11:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3344737 00:06:45.548 11:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:46.116 lslocks: write error 00:06:46.116 11:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3344552 00:06:46.116 11:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3344552 ']' 00:06:46.116 11:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3344552 00:06:46.116 11:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:46.116 11:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:46.116 11:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3344552 00:06:46.116 11:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:46.116 11:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:46.116 11:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3344552' 00:06:46.116 killing process with pid 3344552 00:06:46.116 11:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3344552 00:06:46.116 11:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3344552 00:06:46.685 11:18:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3344737 00:06:46.685 11:18:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3344737 ']' 00:06:46.685 11:18:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3344737 00:06:46.685 11:18:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:46.685 11:18:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:46.685 11:18:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3344737 00:06:46.685 11:18:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:46.685 11:18:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:46.685 11:18:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3344737' 00:06:46.685 killing process with pid 3344737 00:06:46.685 11:18:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3344737 00:06:46.685 11:18:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3344737 00:06:46.946 00:06:46.946 real 0m2.961s 00:06:46.946 user 0m3.207s 00:06:46.946 sys 0m0.899s 00:06:46.946 11:18:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.946 11:18:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.946 ************************************ 00:06:46.946 END TEST locking_app_on_unlocked_coremask 00:06:46.946 ************************************ 00:06:46.946 11:18:15 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:46.946 11:18:15 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:46.946 11:18:15 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:46.946 11:18:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.946 11:18:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.946 ************************************ 00:06:46.946 START TEST locking_app_on_locked_coremask 00:06:46.946 ************************************ 00:06:46.946 11:18:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:46.946 11:18:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3345258 00:06:46.946 11:18:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3345258 /var/tmp/spdk.sock 00:06:46.946 11:18:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:46.946 11:18:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3345258 ']' 00:06:46.946 11:18:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.946 11:18:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.946 11:18:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.946 11:18:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.946 11:18:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.946 [2024-07-15 11:18:15.608383] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:06:46.946 [2024-07-15 11:18:15.608429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3345258 ] 00:06:46.946 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.206 [2024-07-15 11:18:15.667046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.206 [2024-07-15 11:18:15.730318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.778 11:18:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.778 11:18:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:47.778 11:18:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:47.778 11:18:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3345293 00:06:47.778 11:18:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3345293 /var/tmp/spdk2.sock 00:06:47.778 11:18:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:47.778 11:18:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3345293 /var/tmp/spdk2.sock 00:06:47.778 11:18:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:47.778 11:18:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.778 11:18:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:47.778 11:18:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.778 11:18:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3345293 /var/tmp/spdk2.sock 00:06:47.778 11:18:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3345293 ']' 00:06:47.778 11:18:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.779 11:18:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.779 11:18:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.779 11:18:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.779 11:18:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.779 [2024-07-15 11:18:16.422944] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:06:47.779 [2024-07-15 11:18:16.422997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3345293 ] 00:06:47.779 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.040 [2024-07-15 11:18:16.510218] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3345258 has claimed it. 00:06:48.040 [2024-07-15 11:18:16.510261] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:48.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3345293) - No such process 00:06:48.611 ERROR: process (pid: 3345293) is no longer running 00:06:48.611 11:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.611 11:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:48.611 11:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:48.611 11:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.611 11:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:48.611 11:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.611 11:18:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3345258 00:06:48.611 11:18:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3345258 00:06:48.611 11:18:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.871 lslocks: write error 00:06:48.871 11:18:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3345258 00:06:48.871 11:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3345258 ']' 00:06:48.871 11:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3345258 00:06:48.871 11:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:48.871 11:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:48.871 11:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3345258 00:06:49.130 11:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:49.130 11:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:49.130 11:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3345258' 00:06:49.130 killing process with pid 3345258 00:06:49.130 11:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3345258 00:06:49.130 11:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3345258 00:06:49.130 00:06:49.130 real 0m2.238s 00:06:49.130 user 0m2.487s 00:06:49.130 sys 0m0.618s 00:06:49.130 11:18:17 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.130 11:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.130 ************************************ 00:06:49.130 END TEST locking_app_on_locked_coremask 00:06:49.130 ************************************ 00:06:49.130 11:18:17 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:49.130 11:18:17 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:49.130 11:18:17 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.130 11:18:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.130 11:18:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.391 ************************************ 00:06:49.391 START TEST locking_overlapped_coremask 00:06:49.391 ************************************ 00:06:49.391 11:18:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:49.391 11:18:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3345637 00:06:49.391 11:18:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3345637 /var/tmp/spdk.sock 00:06:49.391 11:18:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:49.391 11:18:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3345637 ']' 00:06:49.391 11:18:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.391 11:18:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.391 11:18:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.391 11:18:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.391 11:18:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.391 [2024-07-15 11:18:17.930785] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:06:49.391 [2024-07-15 11:18:17.930831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3345637 ] 00:06:49.391 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.391 [2024-07-15 11:18:17.990072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.391 [2024-07-15 11:18:18.056477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.391 [2024-07-15 11:18:18.056618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.391 [2024-07-15 11:18:18.056621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.332 11:18:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.332 11:18:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:50.332 11:18:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3345966 00:06:50.332 11:18:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3345966 /var/tmp/spdk2.sock 00:06:50.332 11:18:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:50.332 11:18:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:50.332 11:18:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3345966 /var/tmp/spdk2.sock 00:06:50.332 11:18:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:50.332 11:18:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.332 11:18:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:50.332 11:18:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.332 11:18:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3345966 /var/tmp/spdk2.sock 00:06:50.332 11:18:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3345966 ']' 00:06:50.332 11:18:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.332 11:18:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.332 11:18:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.332 11:18:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.332 11:18:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.332 [2024-07-15 11:18:18.738614] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:06:50.332 [2024-07-15 11:18:18.738667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3345966 ] 00:06:50.332 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.332 [2024-07-15 11:18:18.809557] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3345637 has claimed it. 00:06:50.332 [2024-07-15 11:18:18.809591] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:50.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3345966) - No such process 00:06:50.903 ERROR: process (pid: 3345966) is no longer running 00:06:50.903 11:18:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.903 11:18:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:50.903 11:18:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:50.903 11:18:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:50.903 11:18:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:50.903 11:18:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:50.903 11:18:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:50.903 11:18:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:50.903 11:18:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:50.903 11:18:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:50.903 11:18:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3345637 00:06:50.903 11:18:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 3345637 ']' 00:06:50.903 11:18:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 3345637 00:06:50.903 11:18:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:50.903 11:18:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:50.903 11:18:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3345637 00:06:50.903 11:18:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:50.903 11:18:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:50.903 11:18:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3345637' 00:06:50.903 killing process with pid 3345637 00:06:50.903 11:18:19 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 3345637 00:06:50.903 11:18:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 3345637 00:06:51.165 00:06:51.165 real 0m1.746s 00:06:51.165 user 0m4.933s 00:06:51.165 sys 0m0.360s 00:06:51.165 11:18:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.165 11:18:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.165 ************************************ 00:06:51.165 END TEST locking_overlapped_coremask 00:06:51.165 ************************************ 00:06:51.165 11:18:19 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:51.165 11:18:19 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:51.165 11:18:19 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.165 11:18:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.165 11:18:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.165 ************************************ 00:06:51.165 START TEST locking_overlapped_coremask_via_rpc 00:06:51.165 ************************************ 00:06:51.165 11:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:51.165 11:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3346017 00:06:51.165 11:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3346017 /var/tmp/spdk.sock 00:06:51.165 11:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:51.165 11:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3346017 ']' 00:06:51.165 11:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.165 11:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.165 11:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.165 11:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.165 11:18:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.165 [2024-07-15 11:18:19.738219] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:06:51.165 [2024-07-15 11:18:19.738269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3346017 ] 00:06:51.165 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.165 [2024-07-15 11:18:19.799743] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:51.165 [2024-07-15 11:18:19.799779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.426 [2024-07-15 11:18:19.869532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.426 [2024-07-15 11:18:19.869646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.426 [2024-07-15 11:18:19.869649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.997 11:18:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.997 11:18:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:51.997 11:18:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3346346 00:06:51.997 11:18:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3346346 /var/tmp/spdk2.sock 00:06:51.997 11:18:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3346346 ']' 00:06:51.998 11:18:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:51.998 11:18:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.998 11:18:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.998 11:18:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:51.998 11:18:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.998 11:18:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.998 [2024-07-15 11:18:20.564599] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:06:51.998 [2024-07-15 11:18:20.564654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3346346 ] 00:06:51.998 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.998 [2024-07-15 11:18:20.634607] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:51.998 [2024-07-15 11:18:20.634632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:52.257 [2024-07-15 11:18:20.743635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.257 [2024-07-15 11:18:20.743754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.257 [2024-07-15 11:18:20.743755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.828 [2024-07-15 11:18:21.346184] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3346017 has claimed it. 
00:06:52.828 request: 00:06:52.828 { 00:06:52.828 "method": "framework_enable_cpumask_locks", 00:06:52.828 "req_id": 1 00:06:52.828 } 00:06:52.828 Got JSON-RPC error response 00:06:52.828 response: 00:06:52.828 { 00:06:52.828 "code": -32603, 00:06:52.828 "message": "Failed to claim CPU core: 2" 00:06:52.828 } 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3346017 /var/tmp/spdk.sock 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3346017 ']' 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3346346 /var/tmp/spdk2.sock 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3346346 ']' 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.828 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.089 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
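The JSON-RPC error above is the second target (socket /var/tmp/spdk2.sock) being refused core 2, which process 3346017 already holds. A minimal sketch of replaying that call by hand, assuming scripts/rpc.py in the same SPDK tree exposes the method under the name shown in the logged request:

# Hedged sketch: replay the RPC that fails above. Assumes the secondary
# spdk_tgt is still listening on /var/tmp/spdk2.sock and that rpc.py exposes
# framework_enable_cpumask_locks as a subcommand of the same name.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# While process 3346017 holds /var/tmp/spdk_cpu_lock_002, this should come back
# with code -32603, "Failed to claim CPU core: 2", matching the response above.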
00:06:53.089 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.089 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.089 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.089 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:53.089 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:53.089 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:53.089 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:53.089 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:53.089 00:06:53.089 real 0m2.014s 00:06:53.089 user 0m0.775s 00:06:53.089 sys 0m0.153s 00:06:53.089 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.089 11:18:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.089 ************************************ 00:06:53.089 END TEST locking_overlapped_coremask_via_rpc 00:06:53.089 ************************************ 00:06:53.089 11:18:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:53.089 11:18:21 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:53.089 11:18:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3346017 ]] 00:06:53.089 11:18:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3346017 00:06:53.089 11:18:21 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3346017 ']' 00:06:53.089 11:18:21 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3346017 00:06:53.089 11:18:21 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:53.089 11:18:21 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:53.089 11:18:21 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3346017 00:06:53.089 11:18:21 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:53.089 11:18:21 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:53.089 11:18:21 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3346017' 00:06:53.089 killing process with pid 3346017 00:06:53.089 11:18:21 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3346017 00:06:53.089 11:18:21 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3346017 00:06:53.352 11:18:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3346346 ]] 00:06:53.352 11:18:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3346346 00:06:53.352 11:18:22 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3346346 ']' 00:06:53.352 11:18:22 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3346346 00:06:53.352 11:18:22 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:53.352 11:18:22 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:53.352 11:18:22 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3346346 00:06:53.678 11:18:22 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:53.678 11:18:22 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:53.678 11:18:22 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3346346' 00:06:53.678 killing process with pid 3346346 00:06:53.678 11:18:22 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3346346 00:06:53.678 11:18:22 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3346346 00:06:53.678 11:18:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:53.678 11:18:22 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:53.678 11:18:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3346017 ]] 00:06:53.678 11:18:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3346017 00:06:53.678 11:18:22 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3346017 ']' 00:06:53.678 11:18:22 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3346017 00:06:53.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3346017) - No such process 00:06:53.679 11:18:22 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3346017 is not found' 00:06:53.679 Process with pid 3346017 is not found 00:06:53.679 11:18:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3346346 ]] 00:06:53.679 11:18:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3346346 00:06:53.679 11:18:22 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3346346 ']' 00:06:53.679 11:18:22 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3346346 00:06:53.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3346346) - No such process 00:06:53.679 11:18:22 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3346346 is not found' 00:06:53.679 Process with pid 3346346 is not found 00:06:53.679 11:18:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:53.679 00:06:53.679 real 0m15.671s 00:06:53.679 user 0m27.009s 00:06:53.679 sys 0m4.605s 00:06:53.679 11:18:22 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.679 11:18:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.679 ************************************ 00:06:53.679 END TEST cpu_locks 00:06:53.679 ************************************ 00:06:53.679 11:18:22 event -- common/autotest_common.sh@1142 -- # return 0 00:06:53.679 00:06:53.679 real 0m39.965s 00:06:53.679 user 1m16.200s 00:06:53.679 sys 0m7.597s 00:06:53.679 11:18:22 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.679 11:18:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:53.679 ************************************ 00:06:53.679 END TEST event 00:06:53.679 ************************************ 00:06:53.679 11:18:22 -- common/autotest_common.sh@1142 -- # return 0 00:06:53.679 11:18:22 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:53.679 11:18:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:53.679 11:18:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.679 
11:18:22 -- common/autotest_common.sh@10 -- # set +x 00:06:53.679 ************************************ 00:06:53.679 START TEST thread 00:06:53.679 ************************************ 00:06:53.679 11:18:22 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:53.939 * Looking for test storage... 00:06:53.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:53.939 11:18:22 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:53.939 11:18:22 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:53.939 11:18:22 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.939 11:18:22 thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.939 ************************************ 00:06:53.939 START TEST thread_poller_perf 00:06:53.939 ************************************ 00:06:53.939 11:18:22 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:53.939 [2024-07-15 11:18:22.526752] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:06:53.939 [2024-07-15 11:18:22.526856] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3346786 ] 00:06:53.939 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.939 [2024-07-15 11:18:22.590615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.199 [2024-07-15 11:18:22.660304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.199 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:55.146 ====================================== 00:06:55.146 busy:2408494372 (cyc) 00:06:55.146 total_run_count: 287000 00:06:55.146 tsc_hz: 2400000000 (cyc) 00:06:55.146 ====================================== 00:06:55.146 poller_cost: 8391 (cyc), 3496 (nsec) 00:06:55.146 00:06:55.146 real 0m1.216s 00:06:55.146 user 0m1.146s 00:06:55.146 sys 0m0.066s 00:06:55.146 11:18:23 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.146 11:18:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.146 ************************************ 00:06:55.146 END TEST thread_poller_perf 00:06:55.146 ************************************ 00:06:55.146 11:18:23 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:55.146 11:18:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:55.146 11:18:23 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:55.146 11:18:23 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.146 11:18:23 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.146 ************************************ 00:06:55.146 START TEST thread_poller_perf 00:06:55.146 ************************************ 00:06:55.146 11:18:23 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:55.146 [2024-07-15 11:18:23.821819] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:06:55.146 [2024-07-15 11:18:23.821921] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3347135 ] 00:06:55.406 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.406 [2024-07-15 11:18:23.884832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.406 [2024-07-15 11:18:23.950619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.406 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:56.348 ====================================== 00:06:56.348 busy:2402105646 (cyc) 00:06:56.348 total_run_count: 3809000 00:06:56.348 tsc_hz: 2400000000 (cyc) 00:06:56.348 ====================================== 00:06:56.348 poller_cost: 630 (cyc), 262 (nsec) 00:06:56.348 00:06:56.348 real 0m1.205s 00:06:56.348 user 0m1.129s 00:06:56.348 sys 0m0.072s 00:06:56.348 11:18:25 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.348 11:18:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:56.348 ************************************ 00:06:56.348 END TEST thread_poller_perf 00:06:56.348 ************************************ 00:06:56.348 11:18:25 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:56.348 11:18:25 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:56.348 00:06:56.348 real 0m2.675s 00:06:56.348 user 0m2.383s 00:06:56.348 sys 0m0.297s 00:06:56.348 11:18:25 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.348 11:18:25 thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.348 ************************************ 00:06:56.348 END TEST thread 00:06:56.349 ************************************ 00:06:56.632 11:18:25 -- common/autotest_common.sh@1142 -- # return 0 00:06:56.632 11:18:25 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:56.632 11:18:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.632 11:18:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.632 11:18:25 -- common/autotest_common.sh@10 -- # set +x 00:06:56.632 ************************************ 00:06:56.632 START TEST accel 00:06:56.632 ************************************ 00:06:56.632 11:18:25 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:56.632 * Looking for test storage... 00:06:56.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:56.632 11:18:25 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:56.632 11:18:25 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:56.632 11:18:25 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:56.632 11:18:25 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3347414 00:06:56.632 11:18:25 accel -- accel/accel.sh@63 -- # waitforlisten 3347414 00:06:56.632 11:18:25 accel -- common/autotest_common.sh@829 -- # '[' -z 3347414 ']' 00:06:56.632 11:18:25 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.632 11:18:25 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.632 11:18:25 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
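The poller_cost figures in the two poller_perf summaries above appear to follow from the other counters; a rough cross-check under that assumed relationship (not taken from the poller_perf source), using the first run's numbers:

# Assumed relationship: poller_cost(cyc) ~ busy / total_run_count,
# poller_cost(nsec) = poller_cost(cyc) * 1e9 / tsc_hz.
awk 'BEGIN {
  busy = 2408494372; runs = 287000; tsc_hz = 2400000000   # 1-microsecond run
  cyc = busy / runs
  printf "poller_cost: %.2f (cyc), %.2f (nsec)\n", cyc, cyc * 1e9 / tsc_hz
}'
# Prints about 8391.97 (cyc) and 3496.65 (nsec); the reported 8391/3496 look
# like the same values truncated. The 0-microsecond run works out the same way
# (2402105646 / 3809000 is roughly 630.6 cyc, i.e. about 262.8 nsec at 2.4 GHz).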
00:06:56.632 11:18:25 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:56.632 11:18:25 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.632 11:18:25 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:56.632 11:18:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.632 11:18:25 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.632 11:18:25 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.632 11:18:25 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.632 11:18:25 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.632 11:18:25 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.632 11:18:25 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:56.632 11:18:25 accel -- accel/accel.sh@41 -- # jq -r . 00:06:56.632 [2024-07-15 11:18:25.290686] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:06:56.633 [2024-07-15 11:18:25.290754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3347414 ] 00:06:56.633 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.893 [2024-07-15 11:18:25.356197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.893 [2024-07-15 11:18:25.431824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.462 11:18:26 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.462 11:18:26 accel -- common/autotest_common.sh@862 -- # return 0 00:06:57.462 11:18:26 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:57.462 11:18:26 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:57.462 11:18:26 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:57.462 11:18:26 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:57.462 11:18:26 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:57.462 11:18:26 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:57.462 11:18:26 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:57.462 11:18:26 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.462 11:18:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.462 11:18:26 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.462 11:18:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.462 11:18:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.462 11:18:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.462 11:18:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.462 11:18:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.462 11:18:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.462 11:18:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.462 11:18:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.462 11:18:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.462 11:18:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.462 11:18:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.462 11:18:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.462 11:18:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.462 11:18:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.462 11:18:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.462 11:18:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.462 11:18:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.463 11:18:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.463 11:18:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.463 11:18:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.463 11:18:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.463 11:18:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.463 11:18:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.463 11:18:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.463 11:18:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.463 11:18:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.463 11:18:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.463 11:18:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.463 11:18:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.463 11:18:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.463 11:18:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.463 11:18:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.463 11:18:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.463 11:18:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.463 11:18:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.463 11:18:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.463 11:18:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.463 11:18:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.463 11:18:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.463 11:18:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.463 11:18:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.463 11:18:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.463 11:18:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.463 
11:18:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.463 11:18:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.463 11:18:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.463 11:18:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.463 11:18:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.463 11:18:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.463 11:18:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.463 11:18:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.463 11:18:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.463 11:18:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.463 11:18:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.463 11:18:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.463 11:18:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.463 11:18:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.463 11:18:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.463 11:18:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.463 11:18:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.463 11:18:26 accel -- accel/accel.sh@75 -- # killprocess 3347414 00:06:57.463 11:18:26 accel -- common/autotest_common.sh@948 -- # '[' -z 3347414 ']' 00:06:57.463 11:18:26 accel -- common/autotest_common.sh@952 -- # kill -0 3347414 00:06:57.463 11:18:26 accel -- common/autotest_common.sh@953 -- # uname 00:06:57.463 11:18:26 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:57.463 11:18:26 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3347414 00:06:57.723 11:18:26 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:57.723 11:18:26 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:57.723 11:18:26 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3347414' 00:06:57.723 killing process with pid 3347414 00:06:57.723 11:18:26 accel -- common/autotest_common.sh@967 -- # kill 3347414 00:06:57.723 11:18:26 accel -- common/autotest_common.sh@972 -- # wait 3347414 00:06:57.723 11:18:26 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:57.723 11:18:26 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:57.723 11:18:26 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:57.723 11:18:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.723 11:18:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.723 11:18:26 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:57.723 11:18:26 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:57.723 11:18:26 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:57.723 11:18:26 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.723 11:18:26 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.723 11:18:26 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.723 11:18:26 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.723 11:18:26 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.723 11:18:26 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:57.723 11:18:26 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:57.984 11:18:26 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.984 11:18:26 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:57.984 11:18:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:57.984 11:18:26 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:57.984 11:18:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:57.984 11:18:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.984 11:18:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.984 ************************************ 00:06:57.984 START TEST accel_missing_filename 00:06:57.984 ************************************ 00:06:57.984 11:18:26 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:57.984 11:18:26 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:57.984 11:18:26 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:57.984 11:18:26 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:57.984 11:18:26 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.984 11:18:26 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:57.984 11:18:26 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.984 11:18:26 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:57.984 11:18:26 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:57.984 11:18:26 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:57.984 11:18:26 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.984 11:18:26 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.984 11:18:26 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.984 11:18:26 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.984 11:18:26 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.984 11:18:26 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:57.984 11:18:26 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:57.984 [2024-07-15 11:18:26.545149] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:06:57.984 [2024-07-15 11:18:26.545274] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3347599 ] 00:06:57.984 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.984 [2024-07-15 11:18:26.615015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.984 [2024-07-15 11:18:26.680475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.246 [2024-07-15 11:18:26.712357] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.246 [2024-07-15 11:18:26.749438] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:58.246 A filename is required. 
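The 'A filename is required.' error above is the expected result of running '-w compress' with no input file; compress/decompress take their input via -l, as the option listing further down shows. A sketch of the corresponding well-formed invocation, reusing the binary path and the bib input file that the next test case passes:

# Hedged sketch of the non-failing form: compress with an input file via -l,
# using the binary and the bib file already exercised elsewhere in this log.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
    -t 1 -w compress \
    -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib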
00:06:58.246 11:18:26 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:58.246 11:18:26 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:58.246 11:18:26 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:58.246 11:18:26 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:58.246 11:18:26 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:58.246 11:18:26 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:58.246 00:06:58.246 real 0m0.290s 00:06:58.246 user 0m0.221s 00:06:58.246 sys 0m0.110s 00:06:58.246 11:18:26 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.246 11:18:26 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:58.246 ************************************ 00:06:58.246 END TEST accel_missing_filename 00:06:58.246 ************************************ 00:06:58.246 11:18:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.246 11:18:26 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:58.246 11:18:26 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:58.246 11:18:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.246 11:18:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.246 ************************************ 00:06:58.246 START TEST accel_compress_verify 00:06:58.246 ************************************ 00:06:58.246 11:18:26 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:58.246 11:18:26 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:58.246 11:18:26 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:58.246 11:18:26 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:58.246 11:18:26 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.246 11:18:26 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:58.246 11:18:26 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.246 11:18:26 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:58.246 11:18:26 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:58.246 11:18:26 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:58.246 11:18:26 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.246 11:18:26 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.246 11:18:26 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.246 11:18:26 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.246 11:18:26 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.246 11:18:26 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:58.246 11:18:26 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:58.246 [2024-07-15 11:18:26.910129] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:06:58.246 [2024-07-15 11:18:26.910194] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3347786 ] 00:06:58.246 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.507 [2024-07-15 11:18:26.972023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.507 [2024-07-15 11:18:27.037691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.507 [2024-07-15 11:18:27.069707] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.507 [2024-07-15 11:18:27.107042] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:58.507 00:06:58.507 Compression does not support the verify option, aborting. 00:06:58.507 11:18:27 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:58.507 11:18:27 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:58.507 11:18:27 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:58.507 11:18:27 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:58.507 11:18:27 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:58.507 11:18:27 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:58.507 00:06:58.507 real 0m0.281s 00:06:58.507 user 0m0.219s 00:06:58.507 sys 0m0.106s 00:06:58.507 11:18:27 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.507 11:18:27 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:58.507 ************************************ 00:06:58.507 END TEST accel_compress_verify 00:06:58.507 ************************************ 00:06:58.507 11:18:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.507 11:18:27 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:58.507 11:18:27 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:58.507 11:18:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.507 11:18:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.769 ************************************ 00:06:58.769 START TEST accel_wrong_workload 00:06:58.769 ************************************ 00:06:58.769 11:18:27 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:58.769 11:18:27 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:58.769 11:18:27 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:58.769 11:18:27 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:58.769 11:18:27 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.769 11:18:27 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:58.770 11:18:27 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.770 11:18:27 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:58.770 11:18:27 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:58.770 11:18:27 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:58.770 11:18:27 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.770 11:18:27 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.770 11:18:27 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.770 11:18:27 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.770 11:18:27 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.770 11:18:27 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:58.770 11:18:27 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:58.770 Unsupported workload type: foobar 00:06:58.770 [2024-07-15 11:18:27.265078] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:58.770 accel_perf options: 00:06:58.770 [-h help message] 00:06:58.770 [-q queue depth per core] 00:06:58.770 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:58.770 [-T number of threads per core 00:06:58.770 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:58.770 [-t time in seconds] 00:06:58.770 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:58.770 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:58.770 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:58.770 [-l for compress/decompress workloads, name of uncompressed input file 00:06:58.770 [-S for crc32c workload, use this seed value (default 0) 00:06:58.770 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:58.770 [-f for fill workload, use this BYTE value (default 255) 00:06:58.770 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:58.770 [-y verify result if this switch is on] 00:06:58.770 [-a tasks to allocate per core (default: same value as -q)] 00:06:58.770 Can be used to spread operations across a wider range of memory. 
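The usage dump above was triggered by '-w foobar', which is not one of the listed workload types. For contrast, a sketch of an accepted invocation built only from options that also appear elsewhere in this log (crc32c with seed value 32 and verification enabled):

# Hedged sketch of a valid workload selection, mirroring the accel_crc32c test
# traced further down in this log.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
    -t 1 -w crc32c -S 32 -y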
00:06:58.770 11:18:27 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:58.770 11:18:27 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:58.770 11:18:27 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:58.770 11:18:27 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:58.770 00:06:58.770 real 0m0.036s 00:06:58.770 user 0m0.018s 00:06:58.770 sys 0m0.018s 00:06:58.770 11:18:27 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.770 11:18:27 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:58.770 ************************************ 00:06:58.770 END TEST accel_wrong_workload 00:06:58.770 ************************************ 00:06:58.770 Error: writing output failed: Broken pipe 00:06:58.770 11:18:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.770 11:18:27 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:58.770 11:18:27 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:58.770 11:18:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.770 11:18:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.770 ************************************ 00:06:58.770 START TEST accel_negative_buffers 00:06:58.770 ************************************ 00:06:58.770 11:18:27 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:58.770 11:18:27 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:58.770 11:18:27 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:58.770 11:18:27 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:58.770 11:18:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.770 11:18:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:58.770 11:18:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.770 11:18:27 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:58.770 11:18:27 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:58.770 11:18:27 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:58.770 11:18:27 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.770 11:18:27 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.770 11:18:27 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.770 11:18:27 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.770 11:18:27 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.770 11:18:27 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:58.770 11:18:27 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:58.770 -x option must be non-negative. 
00:06:58.770 [2024-07-15 11:18:27.375336] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:58.770 accel_perf options: 00:06:58.770 [-h help message] 00:06:58.770 [-q queue depth per core] 00:06:58.770 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:58.770 [-T number of threads per core 00:06:58.770 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:58.770 [-t time in seconds] 00:06:58.770 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:58.770 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:58.770 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:58.770 [-l for compress/decompress workloads, name of uncompressed input file 00:06:58.770 [-S for crc32c workload, use this seed value (default 0) 00:06:58.770 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:58.770 [-f for fill workload, use this BYTE value (default 255) 00:06:58.770 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:58.770 [-y verify result if this switch is on] 00:06:58.770 [-a tasks to allocate per core (default: same value as -q)] 00:06:58.770 Can be used to spread operations across a wider range of memory. 00:06:58.770 11:18:27 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:58.770 11:18:27 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:58.770 11:18:27 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:58.770 11:18:27 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:58.770 00:06:58.770 real 0m0.034s 00:06:58.770 user 0m0.022s 00:06:58.770 sys 0m0.012s 00:06:58.770 11:18:27 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.770 11:18:27 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:58.770 ************************************ 00:06:58.770 END TEST accel_negative_buffers 00:06:58.770 ************************************ 00:06:58.770 Error: writing output failed: Broken pipe 00:06:58.770 11:18:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.770 11:18:27 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:58.770 11:18:27 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:58.770 11:18:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.770 11:18:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.770 ************************************ 00:06:58.770 START TEST accel_crc32c 00:06:58.770 ************************************ 00:06:58.770 11:18:27 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:58.770 11:18:27 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:58.770 11:18:27 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:58.770 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.770 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.770 11:18:27 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:58.770 11:18:27 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:58.770 11:18:27 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:58.770 11:18:27 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.770 11:18:27 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.770 11:18:27 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.770 11:18:27 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.770 11:18:27 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.770 11:18:27 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:58.770 11:18:27 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:59.032 [2024-07-15 11:18:27.482329] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:06:59.032 [2024-07-15 11:18:27.482391] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3347984 ] 00:06:59.032 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.032 [2024-07-15 11:18:27.542663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.032 [2024-07-15 11:18:27.606630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.032 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.033 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.033 11:18:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.033 11:18:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.033 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.033 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.033 11:18:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.033 11:18:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.033 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.033 11:18:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.418 11:18:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.418 11:18:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:00.418 11:18:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.418 11:18:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.418 11:18:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.418 11:18:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.418 11:18:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.418 11:18:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.418 11:18:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.418 11:18:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.418 11:18:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.418 11:18:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.418 11:18:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.418 11:18:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.418 11:18:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.418 11:18:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.418 11:18:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.418 11:18:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.418 11:18:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.418 11:18:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.418 11:18:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.418 11:18:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.418 11:18:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.418 11:18:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.418 11:18:28 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.418 11:18:28 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:00.418 11:18:28 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.418 00:07:00.418 real 0m1.281s 00:07:00.418 user 0m1.188s 00:07:00.418 sys 0m0.105s 00:07:00.418 11:18:28 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.418 11:18:28 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:00.418 ************************************ 00:07:00.418 END TEST accel_crc32c 00:07:00.418 ************************************ 00:07:00.418 11:18:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:00.418 11:18:28 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:00.418 11:18:28 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:00.418 11:18:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.418 11:18:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.418 ************************************ 00:07:00.418 START TEST accel_crc32c_C2 00:07:00.418 ************************************ 00:07:00.418 11:18:28 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:00.418 11:18:28 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.418 11:18:28 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:00.418 11:18:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.419 11:18:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.419 11:18:28 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:00.419 11:18:28 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:00.419 11:18:28 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.419 11:18:28 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.419 11:18:28 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.419 11:18:28 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.419 11:18:28 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.419 11:18:28 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.419 11:18:28 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:00.419 11:18:28 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:00.419 [2024-07-15 11:18:28.842439] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:07:00.419 [2024-07-15 11:18:28.842531] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3348339 ] 00:07:00.419 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.419 [2024-07-15 11:18:28.904232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.419 [2024-07-15 11:18:28.972544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.419 11:18:29 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:07:00.419 11:18:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.805 11:18:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.805 11:18:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.805 11:18:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.805 11:18:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.805 11:18:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.805 11:18:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.805 11:18:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.805 11:18:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.805 11:18:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.805 11:18:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.805 11:18:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.805 11:18:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.805 11:18:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.805 11:18:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.805 11:18:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.805 11:18:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.805 11:18:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.805 11:18:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.805 11:18:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.805 11:18:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.805 11:18:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.805 11:18:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.805 11:18:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.805 11:18:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.805 11:18:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.805 11:18:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:01.805 11:18:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.805 00:07:01.805 real 0m1.292s 00:07:01.805 user 0m1.202s 00:07:01.805 sys 0m0.102s 00:07:01.805 11:18:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.805 11:18:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:01.805 ************************************ 00:07:01.805 END TEST accel_crc32c_C2 00:07:01.805 ************************************ 00:07:01.805 11:18:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:01.805 11:18:30 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:01.805 11:18:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:01.805 11:18:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.805 11:18:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.805 ************************************ 00:07:01.805 START TEST accel_copy 00:07:01.805 ************************************ 00:07:01.805 11:18:30 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:01.805 [2024-07-15 11:18:30.206316] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:07:01.805 [2024-07-15 11:18:30.206377] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3348537 ] 00:07:01.805 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.805 [2024-07-15 11:18:30.266723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.805 [2024-07-15 11:18:30.331862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.805 11:18:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.191 11:18:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.191 11:18:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.191 11:18:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.191 11:18:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.191 
11:18:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.191 11:18:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.191 11:18:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.191 11:18:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.191 11:18:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.191 11:18:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.191 11:18:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.191 11:18:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.191 11:18:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.191 11:18:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.191 11:18:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.191 11:18:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.191 11:18:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.191 11:18:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.191 11:18:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.191 11:18:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.191 11:18:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.191 11:18:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.191 11:18:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.191 11:18:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.191 11:18:31 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.191 11:18:31 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:03.191 11:18:31 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.191 00:07:03.191 real 0m1.285s 00:07:03.191 user 0m1.191s 00:07:03.191 sys 0m0.105s 00:07:03.191 11:18:31 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.191 11:18:31 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:03.191 ************************************ 00:07:03.191 END TEST accel_copy 00:07:03.191 ************************************ 00:07:03.191 11:18:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:03.191 11:18:31 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:03.191 11:18:31 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:03.191 11:18:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.191 11:18:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.191 ************************************ 00:07:03.191 START TEST accel_fill 00:07:03.191 ************************************ 00:07:03.191 11:18:31 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:03.191 [2024-07-15 11:18:31.567184] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:07:03.191 [2024-07-15 11:18:31.567277] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3348733 ] 00:07:03.191 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.191 [2024-07-15 11:18:31.629523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.191 [2024-07-15 11:18:31.695891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.191 11:18:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 11:18:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.135 11:18:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.135 11:18:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.135 11:18:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 11:18:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.135 11:18:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.135 11:18:32 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:07:04.135 11:18:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 11:18:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.135 11:18:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.135 11:18:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.135 11:18:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 11:18:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.135 11:18:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.135 11:18:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.135 11:18:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 11:18:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.135 11:18:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.135 11:18:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.135 11:18:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 11:18:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.135 11:18:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.135 11:18:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.135 11:18:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 11:18:32 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.135 11:18:32 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:04.135 11:18:32 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.135 00:07:04.135 real 0m1.287s 00:07:04.135 user 0m1.197s 00:07:04.135 sys 0m0.102s 00:07:04.135 11:18:32 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.135 11:18:32 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:04.135 ************************************ 00:07:04.135 END TEST accel_fill 00:07:04.135 ************************************ 00:07:04.396 11:18:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:04.396 11:18:32 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:04.396 11:18:32 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:04.396 11:18:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.396 11:18:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.396 ************************************ 00:07:04.396 START TEST accel_copy_crc32c 00:07:04.396 ************************************ 00:07:04.396 11:18:32 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:04.396 11:18:32 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:04.396 11:18:32 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:04.396 11:18:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.396 11:18:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.396 11:18:32 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:04.396 11:18:32 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:04.396 11:18:32 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:04.396 11:18:32 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.396 11:18:32 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.396 11:18:32 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.396 11:18:32 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.396 11:18:32 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.396 11:18:32 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:04.396 11:18:32 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:04.396 [2024-07-15 11:18:32.929058] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:07:04.396 [2024-07-15 11:18:32.929136] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3349078 ] 00:07:04.396 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.396 [2024-07-15 11:18:32.990862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.396 [2024-07-15 11:18:33.059780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.396 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.396 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.396 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.396 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.396 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.396 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.396 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.396 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.396 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:04.396 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.396 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.396 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.396 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.396 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.396 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.396 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.396 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.396 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.396 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.396 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.396 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:04.396 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.396 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:04.657 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.657 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.657 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:04.657 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.657 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.657 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:07:04.657 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.657 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.657 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.657 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.657 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.657 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.657 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.657 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.657 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.657 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.657 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.657 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.657 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:04.657 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.657 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:04.657 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.657 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.657 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:04.658 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.658 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.658 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.658 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:04.658 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.658 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.658 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.658 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:04.658 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.658 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.658 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.658 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.658 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.658 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.658 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.658 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:04.658 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.658 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.658 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.658 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.658 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.658 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.658 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.658 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.658 
11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.658 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.658 11:18:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.599 11:18:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.599 11:18:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.599 11:18:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.599 11:18:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.599 11:18:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.599 11:18:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.599 11:18:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.599 11:18:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.599 11:18:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.599 11:18:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.599 11:18:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.599 11:18:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.599 11:18:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.599 11:18:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.600 11:18:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.600 11:18:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.600 11:18:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.600 11:18:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.600 11:18:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.600 11:18:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.600 11:18:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.600 11:18:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.600 11:18:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.600 11:18:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.600 11:18:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.600 11:18:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:05.600 11:18:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.600 00:07:05.600 real 0m1.287s 00:07:05.600 user 0m1.193s 00:07:05.600 sys 0m0.105s 00:07:05.600 11:18:34 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.600 11:18:34 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:05.600 ************************************ 00:07:05.600 END TEST accel_copy_crc32c 00:07:05.600 ************************************ 00:07:05.600 11:18:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.600 11:18:34 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:05.600 11:18:34 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:05.600 11:18:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.600 11:18:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.600 ************************************ 00:07:05.600 START TEST accel_copy_crc32c_C2 00:07:05.600 ************************************ 00:07:05.600 11:18:34 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:05.600 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:05.600 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:05.600 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.600 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.600 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:05.600 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:05.600 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.600 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.600 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.600 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.600 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.600 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.600 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:05.600 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:05.600 [2024-07-15 11:18:34.294824] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:07:05.600 [2024-07-15 11:18:34.294930] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3349427 ] 00:07:05.861 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.861 [2024-07-15 11:18:34.363138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.861 [2024-07-15 11:18:34.434277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.861 11:18:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.246 11:18:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.246 11:18:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.246 11:18:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.246 11:18:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.246 11:18:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.246 11:18:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.246 11:18:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.246 11:18:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.246 11:18:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.246 11:18:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.246 11:18:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.246 11:18:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.246 11:18:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.246 11:18:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.246 11:18:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.246 11:18:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.246 11:18:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.246 11:18:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.246 11:18:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.246 11:18:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.246 11:18:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.246 11:18:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.246 11:18:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.246 11:18:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:07:07.246 11:18:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.246 11:18:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:07.246 11:18:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.246 00:07:07.246 real 0m1.300s 00:07:07.246 user 0m1.199s 00:07:07.246 sys 0m0.113s 00:07:07.246 11:18:35 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.246 11:18:35 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:07.246 ************************************ 00:07:07.246 END TEST accel_copy_crc32c_C2 00:07:07.246 ************************************ 00:07:07.246 11:18:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.246 11:18:35 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:07.246 11:18:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:07.246 11:18:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.246 11:18:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.246 ************************************ 00:07:07.246 START TEST accel_dualcast 00:07:07.246 ************************************ 00:07:07.246 11:18:35 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:07.246 [2024-07-15 11:18:35.667561] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:07:07.246 [2024-07-15 11:18:35.667643] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3349776 ] 00:07:07.246 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.246 [2024-07-15 11:18:35.732499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.246 [2024-07-15 11:18:35.801596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.246 11:18:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.247 11:18:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.247 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.247 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.247 11:18:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:07.247 11:18:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.247 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.247 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.247 11:18:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.247 11:18:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.247 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.247 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.247 11:18:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.247 11:18:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.247 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.247 11:18:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.632 11:18:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.632 11:18:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.632 11:18:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.632 11:18:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.632 11:18:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.632 11:18:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.632 11:18:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.632 11:18:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.632 11:18:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.632 11:18:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.632 11:18:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.632 11:18:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.632 11:18:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.632 11:18:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.632 11:18:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.632 11:18:36 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.632 11:18:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.632 11:18:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.632 11:18:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.632 11:18:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.632 11:18:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.632 11:18:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.632 11:18:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.632 11:18:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.632 11:18:36 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.632 11:18:36 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:08.632 11:18:36 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.632 00:07:08.632 real 0m1.291s 00:07:08.632 user 0m1.196s 00:07:08.632 sys 0m0.106s 00:07:08.632 11:18:36 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.632 11:18:36 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:08.632 ************************************ 00:07:08.632 END TEST accel_dualcast 00:07:08.632 ************************************ 00:07:08.632 11:18:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.632 11:18:36 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:08.632 11:18:36 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:08.632 11:18:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.632 11:18:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.632 ************************************ 00:07:08.632 START TEST accel_compare 00:07:08.632 ************************************ 00:07:08.632 11:18:37 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:08.632 [2024-07-15 11:18:37.038914] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:07:08.632 [2024-07-15 11:18:37.039014] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3349992 ] 00:07:08.632 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.632 [2024-07-15 11:18:37.101898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.632 [2024-07-15 11:18:37.171267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.632 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.633 11:18:37 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.633 11:18:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.023 11:18:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.023 11:18:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.023 11:18:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.023 11:18:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.023 11:18:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.023 11:18:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.023 11:18:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.023 11:18:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.023 11:18:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.023 11:18:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.023 11:18:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.023 11:18:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.023 11:18:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.023 11:18:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.023 11:18:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.023 11:18:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.023 
11:18:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.023 11:18:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.023 11:18:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.023 11:18:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.023 11:18:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.023 11:18:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.023 11:18:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.023 11:18:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.023 11:18:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.023 11:18:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:10.023 11:18:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.023 00:07:10.023 real 0m1.292s 00:07:10.023 user 0m1.200s 00:07:10.023 sys 0m0.103s 00:07:10.023 11:18:38 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.023 11:18:38 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:10.023 ************************************ 00:07:10.023 END TEST accel_compare 00:07:10.023 ************************************ 00:07:10.023 11:18:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.023 11:18:38 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:10.023 11:18:38 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:10.023 11:18:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.023 11:18:38 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.023 ************************************ 00:07:10.023 START TEST accel_xor 00:07:10.023 ************************************ 00:07:10.023 11:18:38 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:10.023 11:18:38 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:10.023 11:18:38 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:10.023 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.023 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.023 11:18:38 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:10.023 11:18:38 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:10.023 11:18:38 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:10.023 11:18:38 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.023 11:18:38 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.023 11:18:38 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.023 11:18:38 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.023 11:18:38 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.023 11:18:38 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:10.023 11:18:38 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:10.023 [2024-07-15 11:18:38.408441] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
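The opcode names exercised here map onto simple data-path operations: dualcast copies one source buffer into two destinations, compare checks two buffers for byte-wise equality, xor combines N source buffers into one, and the dif_* opcodes generate or verify T10 protection information. As a loose shell analogy only (the real operations act on DMA-able memory buffers inside accel_perf, not on files):

    # Loose file-based analogies for two of the buffer operations exercised above.
    tee dst1.bin dst2.bin < src.bin > /dev/null   # "dualcast": one source copied to two destinations
    cmp src.bin dst1.bin                          # "compare": byte-wise equality check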
00:07:10.023 [2024-07-15 11:18:38.408580] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3350182 ] 00:07:10.023 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.023 [2024-07-15 11:18:38.482620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.023 [2024-07-15 11:18:38.555553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.023 11:18:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.023 11:18:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.024 11:18:38 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.024 11:18:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.007 11:18:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.007 11:18:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.007 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.007 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.007 11:18:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.007 11:18:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.007 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.007 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.007 11:18:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.007 11:18:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.007 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.007 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.007 11:18:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.007 11:18:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.007 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.007 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.007 11:18:39 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:11.007 11:18:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.007 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.007 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.007 11:18:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.007 11:18:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.007 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.007 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.007 11:18:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.007 11:18:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:11.007 11:18:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.007 00:07:11.007 real 0m1.307s 00:07:11.007 user 0m1.200s 00:07:11.007 sys 0m0.118s 00:07:11.007 11:18:39 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.007 11:18:39 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:11.007 ************************************ 00:07:11.007 END TEST accel_xor 00:07:11.007 ************************************ 00:07:11.267 11:18:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.267 11:18:39 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:11.267 11:18:39 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:11.267 11:18:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.267 11:18:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.267 ************************************ 00:07:11.267 START TEST accel_xor 00:07:11.267 ************************************ 00:07:11.267 11:18:39 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:11.267 [2024-07-15 11:18:39.790437] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
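Every block in this section is driven by the same `run_test <name> accel_test ...` call visible above: it prints the START TEST banner, times the test body (the `real`/`user`/`sys` lines near the end of each block come from that timing), prints the END TEST banner, and propagates the return code. A rough sketch of such a wrapper, inferred only from the banners and timing output in this log (the real helper lives in SPDK's common autotest scripts and is certainly more involved), could look like:

    # Hypothetical wrapper matching the banner/timing pattern seen in this log.
    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                     # e.g. accel_test -t 1 -w xor -y -x 3
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }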
00:07:11.267 [2024-07-15 11:18:39.790502] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3350522 ] 00:07:11.267 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.267 [2024-07-15 11:18:39.852683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.267 [2024-07-15 11:18:39.922297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.267 11:18:39 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.267 11:18:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.665 11:18:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.665 11:18:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.665 11:18:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.665 11:18:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.665 11:18:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.665 11:18:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.665 11:18:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.665 11:18:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.665 11:18:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.665 11:18:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.665 11:18:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.665 11:18:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.665 11:18:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.665 11:18:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.665 11:18:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.665 11:18:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.665 11:18:41 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:12.665 11:18:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.665 11:18:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.665 11:18:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.665 11:18:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.665 11:18:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.665 11:18:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.665 11:18:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.665 11:18:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.665 11:18:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:12.665 11:18:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.665 00:07:12.665 real 0m1.291s 00:07:12.665 user 0m1.203s 00:07:12.665 sys 0m0.099s 00:07:12.665 11:18:41 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.665 11:18:41 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:12.665 ************************************ 00:07:12.665 END TEST accel_xor 00:07:12.665 ************************************ 00:07:12.665 11:18:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:12.665 11:18:41 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:12.665 11:18:41 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:12.665 11:18:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.665 11:18:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.665 ************************************ 00:07:12.665 START TEST accel_dif_verify 00:07:12.665 ************************************ 00:07:12.665 11:18:41 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:12.665 11:18:41 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:12.665 11:18:41 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:12.665 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.665 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.665 11:18:41 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:12.665 11:18:41 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:12.665 11:18:41 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:12.665 11:18:41 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.665 11:18:41 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.665 11:18:41 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.665 11:18:41 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.665 11:18:41 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:12.666 [2024-07-15 11:18:41.156354] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:07:12.666 [2024-07-15 11:18:41.156419] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3350872 ] 00:07:12.666 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.666 [2024-07-15 11:18:41.218663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.666 [2024-07-15 11:18:41.287980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.666 11:18:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.049 11:18:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:07:14.049 11:18:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.049 11:18:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.049 11:18:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.049 11:18:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.049 11:18:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.049 11:18:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.049 11:18:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.049 11:18:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.049 11:18:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.049 11:18:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.049 11:18:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.049 11:18:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.049 11:18:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.049 11:18:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.049 11:18:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.049 11:18:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.049 11:18:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.049 11:18:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.049 11:18:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.049 11:18:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.049 11:18:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.049 11:18:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.049 11:18:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.049 11:18:42 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.049 11:18:42 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:14.049 11:18:42 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.049 00:07:14.049 real 0m1.289s 00:07:14.049 user 0m1.198s 00:07:14.049 sys 0m0.104s 00:07:14.049 11:18:42 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.049 11:18:42 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:14.049 ************************************ 00:07:14.049 END TEST accel_dif_verify 00:07:14.049 ************************************ 00:07:14.049 11:18:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:14.049 11:18:42 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:14.049 11:18:42 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:14.049 11:18:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.049 11:18:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.049 ************************************ 00:07:14.049 START TEST accel_dif_generate 00:07:14.049 ************************************ 00:07:14.049 11:18:42 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.049 
11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:14.049 [2024-07-15 11:18:42.518523] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:07:14.049 [2024-07-15 11:18:42.518587] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3351219 ] 00:07:14.049 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.049 [2024-07-15 11:18:42.580704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.049 [2024-07-15 11:18:42.645986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:14.049 11:18:42 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.049 11:18:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:14.050 11:18:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.050 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.050 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.050 11:18:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:14.050 11:18:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.050 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.050 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.050 11:18:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:14.050 11:18:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.050 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.050 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.050 11:18:42 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:14.050 11:18:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.050 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.050 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.050 11:18:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:14.050 11:18:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.050 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.050 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.050 11:18:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.050 11:18:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.050 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.050 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.050 11:18:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.050 11:18:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.050 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.050 11:18:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.435 11:18:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.435 11:18:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.435 11:18:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.435 11:18:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.435 11:18:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.435 11:18:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.435 11:18:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.435 11:18:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.435 11:18:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.435 11:18:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.435 11:18:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.435 11:18:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.435 11:18:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.435 11:18:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.435 11:18:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.435 11:18:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.435 11:18:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.435 11:18:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.435 11:18:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.435 11:18:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.435 11:18:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.435 11:18:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.435 11:18:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.435 11:18:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.435 11:18:43 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.435 11:18:43 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:15.436 11:18:43 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.436 00:07:15.436 real 0m1.285s 00:07:15.436 user 0m1.190s 00:07:15.436 sys 0m0.109s 00:07:15.436 11:18:43 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.436 11:18:43 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:15.436 ************************************ 00:07:15.436 END TEST accel_dif_generate 00:07:15.436 ************************************ 00:07:15.436 11:18:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:15.436 11:18:43 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:15.436 11:18:43 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:15.436 11:18:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.436 11:18:43 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.436 ************************************ 00:07:15.436 START TEST accel_dif_generate_copy 00:07:15.436 ************************************ 00:07:15.436 11:18:43 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:15.436 11:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:15.436 11:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:15.436 11:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.436 11:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.436 11:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:15.436 11:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:15.436 11:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:15.436 11:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.436 11:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.436 11:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.436 11:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.436 11:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.436 11:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:15.436 11:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:15.436 [2024-07-15 11:18:43.880476] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
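The trace above shows the harness driving the SPDK accel_perf example binary with the software dif_generate and dif_generate_copy workloads, feeding it an accel JSON config over /dev/fd/62 (empty here, per accel_json_cfg=()). A minimal by-hand sketch under the same workspace layout, using $SPDK as shorthand for the workspace path in this log and only flags that appear above; assumptions: hugepages are configured, root privileges are available, and the -c config is omitted because no accel modules need configuring:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # 1-second software runs of the two DIF workloads exercised above
  sudo "$SPDK/build/examples/accel_perf" -t 1 -w dif_generate
  sudo "$SPDK/build/examples/accel_perf" -t 1 -w dif_generate_copy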
00:07:15.436 [2024-07-15 11:18:43.880554] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3351463 ] 00:07:15.436 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.436 [2024-07-15 11:18:43.941703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.436 [2024-07-15 11:18:44.006405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.436 11:18:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.845 11:18:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.845 11:18:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.845 11:18:45 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:16.845 11:18:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.845 11:18:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.845 11:18:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.845 11:18:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.845 11:18:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.845 11:18:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.845 11:18:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.845 11:18:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.845 11:18:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.845 11:18:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.845 11:18:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.845 11:18:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.845 11:18:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.845 11:18:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.845 11:18:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.845 11:18:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.845 11:18:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.845 11:18:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.845 11:18:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.845 11:18:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.845 11:18:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.845 11:18:45 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.845 11:18:45 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:16.845 11:18:45 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.845 00:07:16.845 real 0m1.284s 00:07:16.845 user 0m1.197s 00:07:16.845 sys 0m0.100s 00:07:16.845 11:18:45 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.845 11:18:45 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:16.845 ************************************ 00:07:16.845 END TEST accel_dif_generate_copy 00:07:16.845 ************************************ 00:07:16.845 11:18:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:16.845 11:18:45 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:16.845 11:18:45 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.845 11:18:45 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:16.845 11:18:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.845 11:18:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.845 ************************************ 00:07:16.845 START TEST accel_comp 00:07:16.845 ************************************ 00:07:16.845 11:18:45 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.845 11:18:45 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:16.845 [2024-07-15 11:18:45.241293] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:07:16.845 [2024-07-15 11:18:45.241358] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3351658 ] 00:07:16.845 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.845 [2024-07-15 11:18:45.302812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.845 [2024-07-15 11:18:45.372587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.845 11:18:45 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.845 11:18:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.231 11:18:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.231 11:18:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.231 11:18:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.231 11:18:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.231 11:18:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.231 11:18:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.231 11:18:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.231 11:18:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.231 11:18:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.231 11:18:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.231 11:18:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.231 11:18:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.231 11:18:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.231 11:18:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.231 11:18:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.231 11:18:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.231 11:18:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.231 11:18:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.231 11:18:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.231 11:18:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.231 11:18:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.231 11:18:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.231 11:18:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.231 11:18:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.231 11:18:46 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.231 11:18:46 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:18.231 11:18:46 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.231 00:07:18.231 real 0m1.292s 00:07:18.231 user 0m1.200s 00:07:18.231 sys 0m0.104s 00:07:18.231 11:18:46 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.232 11:18:46 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:18.232 ************************************ 00:07:18.232 END TEST accel_comp 00:07:18.232 ************************************ 00:07:18.232 11:18:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:18.232 11:18:46 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:18.232 11:18:46 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:18.232 11:18:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.232 11:18:46 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:18.232 ************************************ 00:07:18.232 START TEST accel_decomp 00:07:18.232 ************************************ 00:07:18.232 11:18:46 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:18.232 [2024-07-15 11:18:46.610718] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
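For the compress and decompress workloads the same binary is pointed at an input file with -l (the test/accel/bib file in this tree); the decompress cases in this log additionally pass -y. A by-hand sketch with exactly the flag combinations seen above (same assumptions as before about hugepages and privileges):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # flag combinations copied from the compress/decompress invocations in this log
  sudo "$SPDK/build/examples/accel_perf" -t 1 -w compress   -l "$SPDK/test/accel/bib"
  sudo "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y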
00:07:18.232 [2024-07-15 11:18:46.610817] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3351961 ] 00:07:18.232 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.232 [2024-07-15 11:18:46.682225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.232 [2024-07-15 11:18:46.749147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.232 11:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.616 11:18:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.616 11:18:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.616 11:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.616 11:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.616 11:18:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.616 11:18:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.616 11:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.616 11:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.616 11:18:47 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.616 11:18:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.616 11:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.616 11:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.616 11:18:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.616 11:18:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.616 11:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.616 11:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.616 11:18:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.616 11:18:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.616 11:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.616 11:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.616 11:18:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.616 11:18:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.616 11:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.616 11:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.616 11:18:47 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.616 11:18:47 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:19.617 11:18:47 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.617 00:07:19.617 real 0m1.299s 00:07:19.617 user 0m1.202s 00:07:19.617 sys 0m0.109s 00:07:19.617 11:18:47 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.617 11:18:47 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:19.617 ************************************ 00:07:19.617 END TEST accel_decomp 00:07:19.617 ************************************ 00:07:19.617 11:18:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:19.617 11:18:47 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:19.617 11:18:47 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:19.617 11:18:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.617 11:18:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.617 ************************************ 00:07:19.617 START TEST accel_decomp_full 00:07:19.617 ************************************ 00:07:19.617 11:18:47 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:19.617 11:18:47 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:19.617 11:18:47 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:19.617 11:18:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.617 11:18:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.617 11:18:47 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:19.617 11:18:47 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:19.617 11:18:47 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:19.617 11:18:47 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.617 11:18:47 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.617 11:18:47 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.617 11:18:47 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.617 11:18:47 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.617 11:18:47 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:19.617 11:18:47 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:19.617 [2024-07-15 11:18:47.986831] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:07:19.617 [2024-07-15 11:18:47.986926] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3352314 ] 00:07:19.617 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.617 [2024-07-15 11:18:48.060175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.617 [2024-07-15 11:18:48.130600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.617 11:18:48 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.617 11:18:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:20.999 11:18:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:20.999 11:18:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:20.999 11:18:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.000 11:18:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.000 11:18:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.000 11:18:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.000 11:18:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.000 11:18:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.000 11:18:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.000 11:18:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.000 11:18:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.000 11:18:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.000 11:18:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.000 11:18:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.000 11:18:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.000 11:18:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.000 11:18:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.000 11:18:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.000 11:18:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.000 11:18:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.000 11:18:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.000 11:18:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.000 11:18:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.000 11:18:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.000 11:18:49 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.000 11:18:49 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:21.000 11:18:49 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.000 00:07:21.000 real 0m1.320s 00:07:21.000 user 0m1.223s 00:07:21.000 sys 0m0.109s 00:07:21.000 11:18:49 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.000 11:18:49 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:21.000 ************************************ 00:07:21.000 END TEST accel_decomp_full 00:07:21.000 ************************************ 00:07:21.000 11:18:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:21.000 11:18:49 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:21.000 11:18:49 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:07:21.000 11:18:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.000 11:18:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.000 ************************************ 00:07:21.000 START TEST accel_decomp_mcore 00:07:21.000 ************************************ 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:21.000 [2024-07-15 11:18:49.382015] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
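The _mcore variant adds a core mask: -m 0xf requests four cores (0 through 3), which matches the "Total cores available: 4" notice and the four reactor start messages that follow, and the DPDK EAL parameter lines switch from -c 0x1 to -c 0xf accordingly. A by-hand sketch of that invocation, under the same assumptions as the earlier sketches:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # 0xf = binary 1111, i.e. cores 0-3; the single-core runs above use coremask 0x1
  sudo "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -m 0xf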
00:07:21.000 [2024-07-15 11:18:49.382080] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3352663 ] 00:07:21.000 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.000 [2024-07-15 11:18:49.445210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:21.000 [2024-07-15 11:18:49.514845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.000 [2024-07-15 11:18:49.514960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.000 [2024-07-15 11:18:49.515116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.000 [2024-07-15 11:18:49.515117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.000 11:18:49 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:21.000 11:18:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.383 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.383 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.384 00:07:22.384 real 0m1.302s 00:07:22.384 user 0m4.439s 00:07:22.384 sys 0m0.109s 00:07:22.384 11:18:50 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.384 11:18:50 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:22.384 ************************************ 00:07:22.384 END TEST accel_decomp_mcore 00:07:22.384 ************************************ 00:07:22.384 11:18:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:22.384 11:18:50 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.384 11:18:50 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:22.384 11:18:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.384 11:18:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.384 ************************************ 00:07:22.384 START TEST accel_decomp_full_mcore 00:07:22.384 ************************************ 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:22.384 [2024-07-15 11:18:50.761083] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
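For orientation, the accel_decomp_full_mcore run being traced here reduces to a single accel_perf invocation. A minimal standalone sketch, with the binary and bib paths copied verbatim from the trace and the -c /dev/fd/62 JSON-config argument dropped (this run builds an empty accel config, and the option is assumed optional when no module override is needed):

    # 1-second software decompress of the bundled test bib, verifying output (-y),
    # -o 0 (the trace records a 111250-byte transfer for this variant), 4-core mask (-m 0xf)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib \
        -y -o 0 -m 0xf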
00:07:22.384 [2024-07-15 11:18:50.761193] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3352946 ] 00:07:22.384 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.384 [2024-07-15 11:18:50.835955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.384 [2024-07-15 11:18:50.910896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.384 [2024-07-15 11:18:50.911018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.384 [2024-07-15 11:18:50.911184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.384 [2024-07-15 11:18:50.911184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.384 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.385 11:18:50 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:22.385 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.385 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.385 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.385 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.385 11:18:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.768 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.768 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.768 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.768 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.768 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.768 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.768 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.768 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.768 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.768 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.768 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.768 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.768 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.768 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.768 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.768 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.768 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.768 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.768 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.768 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.768 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.768 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.768 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.768 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.769 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.769 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.769 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.769 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.769 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.769 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.769 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.769 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.769 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.769 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.769 11:18:52 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:23.769 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.769 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.769 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:23.769 11:18:52 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.769 00:07:23.769 real 0m1.333s 00:07:23.769 user 0m4.499s 00:07:23.769 sys 0m0.123s 00:07:23.769 11:18:52 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.769 11:18:52 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:23.769 ************************************ 00:07:23.769 END TEST accel_decomp_full_mcore 00:07:23.769 ************************************ 00:07:23.769 11:18:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:23.769 11:18:52 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:23.769 11:18:52 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:23.769 11:18:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.769 11:18:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.769 ************************************ 00:07:23.769 START TEST accel_decomp_mthread 00:07:23.769 ************************************ 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:23.769 [2024-07-15 11:18:52.169898] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
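A quick read of the two multi-core summaries above: the 0xf mask gives four reactors, and user CPU time divided by wall time works out to roughly 4.439 / 1.302 ≈ 3.4 and 4.499 / 1.333 ≈ 3.4 busy cores respectively, so the decompress work really is spread across the mask rather than serialized on one reactor. This is only a back-of-the-envelope reading that ignores sys time and scheduling overhead.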
00:07:23.769 [2024-07-15 11:18:52.169962] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3353163 ] 00:07:23.769 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.769 [2024-07-15 11:18:52.232053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.769 [2024-07-15 11:18:52.301209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.769 11:18:52 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.769 11:18:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.155 11:18:53 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.155 00:07:25.155 real 0m1.296s 00:07:25.155 user 0m1.206s 00:07:25.155 sys 0m0.103s 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.155 11:18:53 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:25.155 ************************************ 00:07:25.155 END TEST accel_decomp_mthread 00:07:25.155 ************************************ 00:07:25.155 11:18:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:25.155 11:18:53 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:25.155 11:18:53 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:25.155 11:18:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.155 11:18:53 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:25.155 ************************************ 00:07:25.155 START TEST accel_decomp_full_mthread 00:07:25.155 ************************************ 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:25.155 [2024-07-15 11:18:53.540952] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
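The -c /dev/fd/62 argument traced for each run is how the wrapper hands accel_perf its JSON accel config without a temporary file; in these runs accel_json_cfg=() is empty, so nothing of substance is passed. A generic illustration of the same inherited-descriptor trick (deliberately using cat rather than accel_perf, since the exact JSON schema accel_perf expects is not shown in this trace):

    json='{"placeholder": true}'    # hypothetical payload, not a real accel config
    cat /dev/fd/62 62<<<"$json"     # the child reads fd 62 via /dev/fd and prints the JSON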
00:07:25.155 [2024-07-15 11:18:53.541025] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3353408 ] 00:07:25.155 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.155 [2024-07-15 11:18:53.605468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.155 [2024-07-15 11:18:53.676847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.155 11:18:53 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.155 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.156 11:18:53 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.156 11:18:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.539 00:07:26.539 real 0m1.327s 00:07:26.539 user 0m1.233s 00:07:26.539 sys 0m0.106s 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.539 11:18:54 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:26.539 ************************************ 00:07:26.539 END 
TEST accel_decomp_full_mthread 00:07:26.539 ************************************ 00:07:26.539 11:18:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:26.539 11:18:54 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:26.539 11:18:54 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:26.539 11:18:54 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:26.539 11:18:54 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:26.539 11:18:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.540 11:18:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.540 11:18:54 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.540 11:18:54 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.540 11:18:54 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.540 11:18:54 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.540 11:18:54 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.540 11:18:54 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:26.540 11:18:54 accel -- accel/accel.sh@41 -- # jq -r . 00:07:26.540 ************************************ 00:07:26.540 START TEST accel_dif_functional_tests 00:07:26.540 ************************************ 00:07:26.540 11:18:54 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:26.540 [2024-07-15 11:18:54.971061] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:07:26.540 [2024-07-15 11:18:54.971130] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3353756 ] 00:07:26.540 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.540 [2024-07-15 11:18:55.034688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:26.540 [2024-07-15 11:18:55.110025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.540 [2024-07-15 11:18:55.110160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.540 [2024-07-15 11:18:55.110180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.540 00:07:26.540 00:07:26.540 CUnit - A unit testing framework for C - Version 2.1-3 00:07:26.540 http://cunit.sourceforge.net/ 00:07:26.540 00:07:26.540 00:07:26.540 Suite: accel_dif 00:07:26.540 Test: verify: DIF generated, GUARD check ...passed 00:07:26.540 Test: verify: DIF generated, APPTAG check ...passed 00:07:26.540 Test: verify: DIF generated, REFTAG check ...passed 00:07:26.540 Test: verify: DIF not generated, GUARD check ...[2024-07-15 11:18:55.165718] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:26.540 passed 00:07:26.540 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 11:18:55.165763] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:26.540 passed 00:07:26.540 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 11:18:55.165785] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:26.540 passed 00:07:26.540 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:26.540 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 
11:18:55.165832] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:26.540 passed 00:07:26.540 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:26.540 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:26.540 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:26.540 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 11:18:55.165945] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:26.540 passed 00:07:26.540 Test: verify copy: DIF generated, GUARD check ...passed 00:07:26.540 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:26.540 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:26.540 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 11:18:55.166063] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:26.540 passed 00:07:26.540 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 11:18:55.166084] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:26.540 passed 00:07:26.540 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 11:18:55.166106] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:26.540 passed 00:07:26.540 Test: generate copy: DIF generated, GUARD check ...passed 00:07:26.540 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:26.540 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:26.540 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:26.540 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:26.540 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:26.540 Test: generate copy: iovecs-len validate ...[2024-07-15 11:18:55.166295] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:26.540 passed 00:07:26.540 Test: generate copy: buffer alignment validate ...passed 00:07:26.540 00:07:26.540 Run Summary: Type Total Ran Passed Failed Inactive 00:07:26.540 suites 1 1 n/a 0 0 00:07:26.540 tests 26 26 26 0 0 00:07:26.540 asserts 115 115 115 0 n/a 00:07:26.540 00:07:26.540 Elapsed time = 0.002 seconds 00:07:26.801 00:07:26.801 real 0m0.367s 00:07:26.801 user 0m0.499s 00:07:26.801 sys 0m0.131s 00:07:26.801 11:18:55 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.801 11:18:55 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:26.801 ************************************ 00:07:26.801 END TEST accel_dif_functional_tests 00:07:26.801 ************************************ 00:07:26.801 11:18:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:26.801 00:07:26.801 real 0m30.202s 00:07:26.801 user 0m33.752s 00:07:26.801 sys 0m4.211s 00:07:26.801 11:18:55 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.801 11:18:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.801 ************************************ 00:07:26.801 END TEST accel 00:07:26.801 ************************************ 00:07:26.801 11:18:55 -- common/autotest_common.sh@1142 -- # return 0 00:07:26.801 11:18:55 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:26.801 11:18:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:26.801 11:18:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.801 11:18:55 -- common/autotest_common.sh@10 -- # set +x 00:07:26.801 ************************************ 00:07:26.801 START TEST accel_rpc 00:07:26.801 ************************************ 00:07:26.801 11:18:55 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:26.801 * Looking for test storage... 00:07:27.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:27.062 11:18:55 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:27.062 11:18:55 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3353881 00:07:27.062 11:18:55 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3353881 00:07:27.062 11:18:55 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:27.062 11:18:55 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 3353881 ']' 00:07:27.062 11:18:55 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.062 11:18:55 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:27.062 11:18:55 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.062 11:18:55 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:27.062 11:18:55 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.062 [2024-07-15 11:18:55.574283] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
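The accel_rpc suite that begins here drives a long-lived spdk_tgt started with --wait-for-rpc, waits for its RPC socket (waitforlisten), issues RPCs, and tears it down (killprocess). A stripped-down sketch of that lifecycle, with a crude polling loop standing in for the harness's waitforlisten helper:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc &
    tgt_pid=$!
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    until "$rpc" spdk_get_version >/dev/null 2>&1; do sleep 0.2; done   # stand-in for waitforlisten
    "$rpc" framework_start_init                                         # leave the pre-init state
    kill "$tgt_pid"; wait "$tgt_pid"                                     # stand-in for killprocess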
00:07:27.062 [2024-07-15 11:18:55.574353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3353881 ] 00:07:27.062 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.062 [2024-07-15 11:18:55.637639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.062 [2024-07-15 11:18:55.712492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.632 11:18:56 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:27.632 11:18:56 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:27.632 11:18:56 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:27.632 11:18:56 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:27.632 11:18:56 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:27.632 11:18:56 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:27.632 11:18:56 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:27.632 11:18:56 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:27.632 11:18:56 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.632 11:18:56 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.929 ************************************ 00:07:27.929 START TEST accel_assign_opcode 00:07:27.929 ************************************ 00:07:27.929 11:18:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:27.929 11:18:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:27.929 11:18:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.929 11:18:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:27.929 [2024-07-15 11:18:56.358389] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:27.929 11:18:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.929 11:18:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:27.929 11:18:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.929 11:18:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:27.929 [2024-07-15 11:18:56.370416] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:27.929 11:18:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.929 11:18:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:27.929 11:18:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.929 11:18:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:27.929 11:18:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.929 11:18:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:27.929 11:18:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:27.929 11:18:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:07:27.929 11:18:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:27.929 11:18:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:27.929 11:18:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.929 software 00:07:27.929 00:07:27.929 real 0m0.210s 00:07:27.929 user 0m0.046s 00:07:27.929 sys 0m0.014s 00:07:27.929 11:18:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.929 11:18:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:27.929 ************************************ 00:07:27.929 END TEST accel_assign_opcode 00:07:27.929 ************************************ 00:07:28.227 11:18:56 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:28.227 11:18:56 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3353881 00:07:28.227 11:18:56 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 3353881 ']' 00:07:28.227 11:18:56 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 3353881 00:07:28.227 11:18:56 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:28.227 11:18:56 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:28.227 11:18:56 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3353881 00:07:28.227 11:18:56 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:28.227 11:18:56 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:28.227 11:18:56 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3353881' 00:07:28.227 killing process with pid 3353881 00:07:28.227 11:18:56 accel_rpc -- common/autotest_common.sh@967 -- # kill 3353881 00:07:28.227 11:18:56 accel_rpc -- common/autotest_common.sh@972 -- # wait 3353881 00:07:28.227 00:07:28.227 real 0m1.464s 00:07:28.227 user 0m1.541s 00:07:28.227 sys 0m0.407s 00:07:28.227 11:18:56 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.227 11:18:56 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.227 ************************************ 00:07:28.227 END TEST accel_rpc 00:07:28.227 ************************************ 00:07:28.227 11:18:56 -- common/autotest_common.sh@1142 -- # return 0 00:07:28.227 11:18:56 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:28.227 11:18:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:28.227 11:18:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.227 11:18:56 -- common/autotest_common.sh@10 -- # set +x 00:07:28.488 ************************************ 00:07:28.488 START TEST app_cmdline 00:07:28.488 ************************************ 00:07:28.488 11:18:56 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:28.488 * Looking for test storage... 
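Stepping back to the accel_assign_opcode test that just finished above: it pins the copy opcode to a module while the target is still in its pre-init state, then initializes and reads the assignment back. Outside the harness the same three calls can be issued directly against a target launched with --wait-for-rpc:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" accel_assign_opc -o copy -m software      # map the copy opcode to the software module
    "$rpc" framework_start_init                      # finish startup so modules are loaded
    "$rpc" accel_get_opc_assignments | jq -r .copy   # prints "software", matching the trace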
00:07:28.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:28.488 11:18:57 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:28.488 11:18:57 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3354233 00:07:28.488 11:18:57 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3354233 00:07:28.488 11:18:57 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:28.488 11:18:57 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 3354233 ']' 00:07:28.488 11:18:57 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.488 11:18:57 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:28.488 11:18:57 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.488 11:18:57 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:28.488 11:18:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:28.488 [2024-07-15 11:18:57.095722] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:07:28.488 [2024-07-15 11:18:57.095770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3354233 ] 00:07:28.488 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.488 [2024-07-15 11:18:57.156084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.748 [2024-07-15 11:18:57.221584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.321 11:18:57 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:29.321 11:18:57 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:29.321 11:18:57 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:29.321 { 00:07:29.321 "version": "SPDK v24.09-pre git sha1 3b4b1d00c", 00:07:29.321 "fields": { 00:07:29.321 "major": 24, 00:07:29.321 "minor": 9, 00:07:29.321 "patch": 0, 00:07:29.321 "suffix": "-pre", 00:07:29.321 "commit": "3b4b1d00c" 00:07:29.321 } 00:07:29.321 } 00:07:29.321 11:18:58 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:29.321 11:18:58 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:29.321 11:18:58 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:29.321 11:18:58 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:29.583 11:18:58 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:29.583 11:18:58 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:29.583 11:18:58 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:29.583 11:18:58 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.583 11:18:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:29.583 11:18:58 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.583 11:18:58 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:29.583 11:18:58 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:29.583 11:18:58 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:29.583 11:18:58 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:29.583 11:18:58 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:29.583 11:18:58 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.583 11:18:58 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.583 11:18:58 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.583 11:18:58 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.583 11:18:58 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.583 11:18:58 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.583 11:18:58 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.583 11:18:58 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:29.583 11:18:58 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:29.583 request: 00:07:29.583 { 00:07:29.583 "method": "env_dpdk_get_mem_stats", 00:07:29.583 "req_id": 1 00:07:29.583 } 00:07:29.583 Got JSON-RPC error response 00:07:29.583 response: 00:07:29.583 { 00:07:29.583 "code": -32601, 00:07:29.583 "message": "Method not found" 00:07:29.583 } 00:07:29.583 11:18:58 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:29.583 11:18:58 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:29.583 11:18:58 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:29.583 11:18:58 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:29.583 11:18:58 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3354233 00:07:29.583 11:18:58 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 3354233 ']' 00:07:29.583 11:18:58 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 3354233 00:07:29.583 11:18:58 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:29.583 11:18:58 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:29.583 11:18:58 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3354233 00:07:29.845 11:18:58 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:29.845 11:18:58 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:29.845 11:18:58 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3354233' 00:07:29.845 killing process with pid 3354233 00:07:29.845 11:18:58 app_cmdline -- common/autotest_common.sh@967 -- # kill 3354233 00:07:29.845 11:18:58 app_cmdline -- common/autotest_common.sh@972 -- # wait 3354233 00:07:29.845 00:07:29.845 real 0m1.561s 00:07:29.845 user 0m1.900s 00:07:29.845 sys 0m0.383s 00:07:29.845 11:18:58 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
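What the app_cmdline trace above boils down to: spdk_tgt is launched with an RPC allow-list, the two permitted methods succeed, and anything else is rejected with JSON-RPC error -32601. A minimal equivalent sequence, assuming the same checkout layout and default RPC socket:

  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  scripts/rpc.py spdk_get_version                      # allowed: prints the version object shown above
  scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort  # allowed: lists only the two permitted methods
  scripts/rpc.py env_dpdk_get_mem_stats                # not on the allow-list: "Method not found" (-32601)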
00:07:29.845 11:18:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:29.845 ************************************ 00:07:29.845 END TEST app_cmdline 00:07:29.845 ************************************ 00:07:29.845 11:18:58 -- common/autotest_common.sh@1142 -- # return 0 00:07:29.845 11:18:58 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:29.845 11:18:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:29.845 11:18:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.845 11:18:58 -- common/autotest_common.sh@10 -- # set +x 00:07:30.106 ************************************ 00:07:30.106 START TEST version 00:07:30.106 ************************************ 00:07:30.106 11:18:58 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:30.106 * Looking for test storage... 00:07:30.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:30.106 11:18:58 version -- app/version.sh@17 -- # get_header_version major 00:07:30.106 11:18:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:30.106 11:18:58 version -- app/version.sh@14 -- # cut -f2 00:07:30.106 11:18:58 version -- app/version.sh@14 -- # tr -d '"' 00:07:30.106 11:18:58 version -- app/version.sh@17 -- # major=24 00:07:30.106 11:18:58 version -- app/version.sh@18 -- # get_header_version minor 00:07:30.106 11:18:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:30.106 11:18:58 version -- app/version.sh@14 -- # cut -f2 00:07:30.106 11:18:58 version -- app/version.sh@14 -- # tr -d '"' 00:07:30.106 11:18:58 version -- app/version.sh@18 -- # minor=9 00:07:30.106 11:18:58 version -- app/version.sh@19 -- # get_header_version patch 00:07:30.106 11:18:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:30.106 11:18:58 version -- app/version.sh@14 -- # cut -f2 00:07:30.106 11:18:58 version -- app/version.sh@14 -- # tr -d '"' 00:07:30.106 11:18:58 version -- app/version.sh@19 -- # patch=0 00:07:30.106 11:18:58 version -- app/version.sh@20 -- # get_header_version suffix 00:07:30.106 11:18:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:30.106 11:18:58 version -- app/version.sh@14 -- # cut -f2 00:07:30.106 11:18:58 version -- app/version.sh@14 -- # tr -d '"' 00:07:30.106 11:18:58 version -- app/version.sh@20 -- # suffix=-pre 00:07:30.106 11:18:58 version -- app/version.sh@22 -- # version=24.9 00:07:30.106 11:18:58 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:30.106 11:18:58 version -- app/version.sh@28 -- # version=24.9rc0 00:07:30.106 11:18:58 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:30.106 11:18:58 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:07:30.106 11:18:58 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:30.106 11:18:58 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:30.106 00:07:30.106 real 0m0.169s 00:07:30.106 user 0m0.089s 00:07:30.106 sys 0m0.117s 00:07:30.106 11:18:58 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.106 11:18:58 version -- common/autotest_common.sh@10 -- # set +x 00:07:30.106 ************************************ 00:07:30.106 END TEST version 00:07:30.107 ************************************ 00:07:30.107 11:18:58 -- common/autotest_common.sh@1142 -- # return 0 00:07:30.107 11:18:58 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:30.107 11:18:58 -- spdk/autotest.sh@198 -- # uname -s 00:07:30.107 11:18:58 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:30.107 11:18:58 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:30.107 11:18:58 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:30.107 11:18:58 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:30.107 11:18:58 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:30.107 11:18:58 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:30.107 11:18:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:30.107 11:18:58 -- common/autotest_common.sh@10 -- # set +x 00:07:30.369 11:18:58 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:30.369 11:18:58 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:30.369 11:18:58 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:30.369 11:18:58 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:30.369 11:18:58 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:30.369 11:18:58 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:30.369 11:18:58 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:30.369 11:18:58 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:30.369 11:18:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.369 11:18:58 -- common/autotest_common.sh@10 -- # set +x 00:07:30.369 ************************************ 00:07:30.369 START TEST nvmf_tcp 00:07:30.369 ************************************ 00:07:30.369 11:18:58 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:30.369 * Looking for test storage... 00:07:30.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:30.369 11:18:58 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:30.369 11:18:58 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:30.369 11:18:58 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.369 11:18:58 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:30.369 11:18:58 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.369 11:18:58 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.369 11:18:58 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.369 11:18:58 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.369 11:18:58 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.369 11:18:58 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.369 11:18:58 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.369 11:18:58 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.369 11:18:58 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.369 11:18:58 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.369 11:18:58 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:30.369 11:18:58 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:30.369 11:18:58 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.369 11:18:58 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.369 11:18:58 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.369 11:18:58 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.369 11:18:58 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.369 11:18:59 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.369 11:18:59 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.369 11:18:59 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.369 11:18:59 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.369 11:18:59 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.369 11:18:59 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.369 11:18:59 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:30.370 11:18:59 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.370 11:18:59 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:30.370 11:18:59 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:30.370 11:18:59 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:30.370 11:18:59 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.370 11:18:59 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.370 11:18:59 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.370 11:18:59 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:30.370 11:18:59 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:30.370 11:18:59 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:30.370 11:18:59 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:30.370 11:18:59 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:30.370 11:18:59 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:30.370 11:18:59 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:30.370 11:18:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:30.370 11:18:59 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:30.370 11:18:59 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:30.370 11:18:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:30.370 11:18:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.370 11:18:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:30.370 ************************************ 00:07:30.370 START TEST nvmf_example 00:07:30.370 ************************************ 00:07:30.370 11:18:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:30.632 * Looking for test storage... 
00:07:30.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:30.632 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.633 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:30.633 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:30.633 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:30.633 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.633 11:18:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:30.633 11:18:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.633 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:30.633 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:30.633 11:18:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:30.633 11:18:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:37.252 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:37.252 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:37.252 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:37.253 Found net devices under 
0000:4b:00.0: cvl_0_0 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:37.253 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:37.253 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:37.514 11:19:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:37.514 11:19:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:37.514 11:19:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:37.514 11:19:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:37.514 11:19:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:37.514 11:19:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:37.514 11:19:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:37.774 11:19:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:37.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:37.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.563 ms 00:07:37.774 00:07:37.774 --- 10.0.0.2 ping statistics --- 00:07:37.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.774 rtt min/avg/max/mdev = 0.563/0.563/0.563/0.000 ms 00:07:37.774 11:19:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:37.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:37.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:07:37.774 00:07:37.774 --- 10.0.0.1 ping statistics --- 00:07:37.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.774 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:07:37.774 11:19:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:37.774 11:19:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:37.774 11:19:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:37.774 11:19:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:37.774 11:19:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:37.774 11:19:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:37.774 11:19:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:37.774 11:19:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:37.774 11:19:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:37.774 11:19:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:37.774 11:19:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:37.774 11:19:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:37.774 11:19:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.774 11:19:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:37.774 11:19:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:37.774 11:19:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3358557 00:07:37.774 11:19:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:37.774 11:19:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3358557 00:07:37.774 11:19:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:37.774 11:19:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 3358557 ']' 00:07:37.774 11:19:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.774 11:19:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:37.774 11:19:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
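The interface plumbing traced above reduces to a small recipe: one port of the detected e810 pair (cvl_0_0) is moved into a private network namespace to act as the target side, the other (cvl_0_1) stays in the root namespace as the initiator side, and 4420/tcp is opened before the pings confirm the 10.0.0.0/24 link. A condensed sketch of the same setup, with the interface names taken from the devices found earlier in the log:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp                                              # kernel NVMe/TCP support on the initiator side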
00:07:37.774 11:19:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:37.774 11:19:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.774 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.717 11:19:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:38.717 11:19:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:38.717 11:19:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:38.717 11:19:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:38.717 11:19:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:38.717 11:19:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:38.717 11:19:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.717 11:19:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:38.717 11:19:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.717 11:19:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:38.717 11:19:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.717 11:19:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:38.717 11:19:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.717 11:19:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:38.717 11:19:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:38.717 11:19:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.717 11:19:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:38.717 11:19:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.717 11:19:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:38.717 11:19:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:38.717 11:19:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.717 11:19:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:38.717 11:19:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.717 11:19:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:38.717 11:19:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.717 11:19:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:38.717 11:19:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.717 11:19:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:38.717 11:19:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:38.717 EAL: No free 2048 kB hugepages reported on node 1 
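Everything from here to the latency table is produced by the bring-up and measurement traced above: the example nvmf target (started inside the namespace with -i 0 -g 10000 -m 0xF) is configured over RPC with a TCP transport, a 64 MB malloc bdev, and a subsystem listening on 10.0.0.2:4420, and then spdk_nvme_perf drives it for 10 seconds. A standalone sketch of the same sequence, assuming that target is already running and serving the default RPC socket:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512                       # 64 MB bdev, 512 B blocks -> Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'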
00:07:48.719 Initializing NVMe Controllers 00:07:48.719 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:48.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:48.719 Initialization complete. Launching workers. 00:07:48.719 ======================================================== 00:07:48.719 Latency(us) 00:07:48.719 Device Information : IOPS MiB/s Average min max 00:07:48.719 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17446.47 68.15 3668.17 847.39 16682.83 00:07:48.719 ======================================================== 00:07:48.719 Total : 17446.47 68.15 3668.17 847.39 16682.83 00:07:48.719 00:07:48.719 11:19:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:48.719 11:19:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:48.719 11:19:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:48.719 11:19:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:48.719 11:19:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:48.719 11:19:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:48.719 11:19:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:48.719 11:19:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:48.719 rmmod nvme_tcp 00:07:48.980 rmmod nvme_fabrics 00:07:48.980 rmmod nvme_keyring 00:07:48.980 11:19:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:48.980 11:19:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:48.980 11:19:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:48.980 11:19:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3358557 ']' 00:07:48.981 11:19:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3358557 00:07:48.981 11:19:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 3358557 ']' 00:07:48.981 11:19:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 3358557 00:07:48.981 11:19:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:48.981 11:19:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:48.981 11:19:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3358557 00:07:48.981 11:19:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:48.981 11:19:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:48.981 11:19:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3358557' 00:07:48.981 killing process with pid 3358557 00:07:48.981 11:19:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 3358557 00:07:48.981 11:19:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 3358557 00:07:48.981 nvmf threads initialize successfully 00:07:48.981 bdev subsystem init successfully 00:07:48.981 created a nvmf target service 00:07:48.981 create targets's poll groups done 00:07:48.981 all subsystems of target started 00:07:48.981 nvmf target is running 00:07:48.981 all subsystems of target stopped 00:07:48.981 destroy targets's poll groups done 00:07:48.981 destroyed the nvmf target service 00:07:48.981 bdev subsystem finish successfully 00:07:48.981 nvmf threads destroy successfully 00:07:48.981 11:19:17 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:48.981 11:19:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:48.981 11:19:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:48.981 11:19:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:48.981 11:19:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:48.981 11:19:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.981 11:19:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.981 11:19:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.526 11:19:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:51.526 11:19:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:51.526 11:19:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:51.526 11:19:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:51.526 00:07:51.526 real 0m20.707s 00:07:51.526 user 0m46.265s 00:07:51.526 sys 0m6.270s 00:07:51.526 11:19:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.526 11:19:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:51.526 ************************************ 00:07:51.526 END TEST nvmf_example 00:07:51.526 ************************************ 00:07:51.526 11:19:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:51.526 11:19:19 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:51.526 11:19:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:51.526 11:19:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.526 11:19:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:51.526 ************************************ 00:07:51.526 START TEST nvmf_filesystem 00:07:51.526 ************************************ 00:07:51.526 11:19:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:51.526 * Looking for test storage... 
00:07:51.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.526 11:19:19 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:51.526 11:19:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:51.526 11:19:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:51.526 11:19:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:51.526 11:19:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:51.526 11:19:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:51.526 11:19:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:51.526 11:19:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:51.526 11:19:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:51.526 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:51.526 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:51.526 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:51.526 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:51.526 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:51.526 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:51.526 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:51.526 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:51.526 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:51.526 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:51.526 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:51.526 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:51.526 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:51.526 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:51.526 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:51.527 11:19:19 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:51.527 11:19:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:51.527 #define SPDK_CONFIG_H 00:07:51.527 #define SPDK_CONFIG_APPS 1 00:07:51.527 #define SPDK_CONFIG_ARCH native 00:07:51.527 #undef SPDK_CONFIG_ASAN 00:07:51.527 #undef SPDK_CONFIG_AVAHI 00:07:51.527 #undef SPDK_CONFIG_CET 00:07:51.527 #define SPDK_CONFIG_COVERAGE 1 00:07:51.527 #define SPDK_CONFIG_CROSS_PREFIX 00:07:51.527 #undef SPDK_CONFIG_CRYPTO 00:07:51.527 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:51.527 #undef SPDK_CONFIG_CUSTOMOCF 00:07:51.527 #undef SPDK_CONFIG_DAOS 00:07:51.527 #define SPDK_CONFIG_DAOS_DIR 00:07:51.527 #define SPDK_CONFIG_DEBUG 1 00:07:51.527 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:51.527 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:51.527 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:51.527 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:51.527 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:51.527 #undef SPDK_CONFIG_DPDK_UADK 00:07:51.527 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:51.527 #define SPDK_CONFIG_EXAMPLES 1 00:07:51.527 #undef SPDK_CONFIG_FC 00:07:51.527 #define SPDK_CONFIG_FC_PATH 00:07:51.527 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:51.527 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:51.527 #undef SPDK_CONFIG_FUSE 00:07:51.527 #undef SPDK_CONFIG_FUZZER 00:07:51.527 #define SPDK_CONFIG_FUZZER_LIB 00:07:51.527 #undef SPDK_CONFIG_GOLANG 00:07:51.527 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:51.527 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:51.527 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:51.527 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:51.527 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:51.527 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:51.527 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:51.527 #define SPDK_CONFIG_IDXD 1 00:07:51.527 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:51.527 #undef SPDK_CONFIG_IPSEC_MB 00:07:51.527 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:51.527 #define SPDK_CONFIG_ISAL 1 00:07:51.527 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:51.527 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:51.527 #define SPDK_CONFIG_LIBDIR 00:07:51.527 #undef SPDK_CONFIG_LTO 00:07:51.527 #define SPDK_CONFIG_MAX_LCORES 128 00:07:51.527 #define SPDK_CONFIG_NVME_CUSE 1 00:07:51.527 #undef SPDK_CONFIG_OCF 00:07:51.527 #define SPDK_CONFIG_OCF_PATH 00:07:51.527 #define 
SPDK_CONFIG_OPENSSL_PATH 00:07:51.527 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:51.527 #define SPDK_CONFIG_PGO_DIR 00:07:51.527 #undef SPDK_CONFIG_PGO_USE 00:07:51.527 #define SPDK_CONFIG_PREFIX /usr/local 00:07:51.527 #undef SPDK_CONFIG_RAID5F 00:07:51.527 #undef SPDK_CONFIG_RBD 00:07:51.527 #define SPDK_CONFIG_RDMA 1 00:07:51.527 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:51.527 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:51.527 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:51.527 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:51.527 #define SPDK_CONFIG_SHARED 1 00:07:51.527 #undef SPDK_CONFIG_SMA 00:07:51.527 #define SPDK_CONFIG_TESTS 1 00:07:51.527 #undef SPDK_CONFIG_TSAN 00:07:51.527 #define SPDK_CONFIG_UBLK 1 00:07:51.527 #define SPDK_CONFIG_UBSAN 1 00:07:51.527 #undef SPDK_CONFIG_UNIT_TESTS 00:07:51.527 #undef SPDK_CONFIG_URING 00:07:51.527 #define SPDK_CONFIG_URING_PATH 00:07:51.527 #undef SPDK_CONFIG_URING_ZNS 00:07:51.527 #undef SPDK_CONFIG_USDT 00:07:51.527 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:51.527 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:51.527 #define SPDK_CONFIG_VFIO_USER 1 00:07:51.527 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:51.527 #define SPDK_CONFIG_VHOST 1 00:07:51.527 #define SPDK_CONFIG_VIRTIO 1 00:07:51.527 #undef SPDK_CONFIG_VTUNE 00:07:51.528 #define SPDK_CONFIG_VTUNE_DIR 00:07:51.528 #define SPDK_CONFIG_WERROR 1 00:07:51.528 #define SPDK_CONFIG_WPDK_DIR 00:07:51.528 #undef SPDK_CONFIG_XNVME 00:07:51.528 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:51.528 11:19:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:51.528 11:19:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.528 11:19:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.528 11:19:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.528 11:19:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.528 11:19:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.528 11:19:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.528 11:19:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.528 11:19:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:51.528 11:19:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.528 11:19:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:51.528 11:19:19 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:51.528 11:19:19 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:51.528 11:19:19 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:51.528 11:19:19 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:51.528 11:19:19 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:51.528 11:19:19 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:51.528 11:19:19 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:51.528 11:19:19 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:51.528 11:19:19 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:51.528 11:19:19 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:51.528 11:19:19 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:51.528 11:19:19 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:51.528 11:19:19 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:51.528 11:19:19 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:51.528 11:19:19 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:51.528 11:19:19 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:51.528 11:19:19 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:51.528 11:19:19 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:51.528 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:51.529 11:19:20 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:51.529 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
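
Up to this point the trace shows common/autotest_common.sh taking the switches seeded by autorun-spdk.conf (SPDK_TEST_NVMF=1, SPDK_RUN_UBSAN=1, and so on), defaulting every unset SPDK_TEST_*/SPDK_RUN_* flag to 0, and exporting them together with the library paths and sanitizer options before any test storage is probed. The paired ": 0" / "export NAME" entries in the xtrace are bash's default-assignment expansion inside the no-op ":" builtin. The snippet below is only a sketch of that idiom with a few of the flag names seen above, not the actual autotest_common.sh source:

    #!/usr/bin/env bash
    # Sketch of the defaulting idiom visible in the trace. A value already
    # present in the environment (e.g. SPDK_TEST_NVMF=1 from autorun-spdk.conf)
    # is kept; anything unset falls back to 0. The export makes the flag
    # visible to every child test script.
    : "${SPDK_TEST_NVMF:=0}";     export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVME_CLI:=0}"; export SPDK_TEST_NVME_CLI
    : "${SPDK_RUN_UBSAN:=0}";     export SPDK_RUN_UBSAN

    # Sanitizer knobs are exported the same way, as plain option strings
    # (value copied from the run above):
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

    echo "NVMF=${SPDK_TEST_NVMF} NVME_CLI=${SPDK_TEST_NVME_CLI} UBSAN=${SPDK_RUN_UBSAN}"
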
00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 3361331 ]] 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 3361331 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.Pzxmb1 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.Pzxmb1/tests/target /tmp/spdk.Pzxmb1 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- 
# avails["$mount"]=67108864 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=954236928 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330192896 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=118670532608 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129371013120 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10700480512 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64680796160 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685506560 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864503296 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874202624 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9699328 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=216064 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:07:51.530 11:19:20 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=287744 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64684273664 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685506560 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1232896 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937097216 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937101312 00:07:51.530 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:51.531 * Looking for test storage... 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=118670532608 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=12915073024 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.531 11:19:20 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:51.532 11:19:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:58.212 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 
(0x8086 - 0x159b)' 00:07:58.212 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:58.212 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:58.212 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:58.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:58.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:07:58.212 00:07:58.212 --- 10.0.0.2 ping statistics --- 00:07:58.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.212 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:58.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:58.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.386 ms 00:07:58.212 00:07:58.212 --- 10.0.0.1 ping statistics --- 00:07:58.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.212 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:58.212 11:19:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:58.474 11:19:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:58.474 11:19:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:58.474 11:19:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.474 11:19:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.474 ************************************ 00:07:58.474 START TEST nvmf_filesystem_no_in_capsule 00:07:58.474 ************************************ 00:07:58.474 11:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:58.474 11:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:58.474 11:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:58.474 11:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:58.474 11:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:58.474 11:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.474 11:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3364985 00:07:58.474 11:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3364985 00:07:58.474 11:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3364985 ']' 00:07:58.474 11:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:58.474 11:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.474 11:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:58.474 11:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.474 11:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:58.474 11:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.474 [2024-07-15 11:19:27.018885] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:07:58.474 [2024-07-15 11:19:27.018945] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.474 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.474 [2024-07-15 11:19:27.091621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:58.474 [2024-07-15 11:19:27.170676] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:58.474 [2024-07-15 11:19:27.170718] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:58.474 [2024-07-15 11:19:27.170726] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:58.474 [2024-07-15 11:19:27.170733] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:58.474 [2024-07-15 11:19:27.170738] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:58.474 [2024-07-15 11:19:27.170905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.474 [2024-07-15 11:19:27.171049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.474 [2024-07-15 11:19:27.171207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:58.474 [2024-07-15 11:19:27.171370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.416 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:59.416 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:59.416 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:59.416 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:59.416 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.416 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:59.416 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:59.416 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:59.416 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.416 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.417 [2024-07-15 11:19:27.848737] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
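The nvmf_tcp_init sequence traced above splits the two E810 ports across network namespaces so target and initiator can speak NVMe/TCP on one host: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables rule opens TCP port 4420 on the initiator side, and the two pings confirm reachability in both directions before nvmf_tgt is launched inside that namespace (the "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF" line above). A condensed sketch of the same wiring, assuming the cvl_0_* names the ice driver assigned on this rig:

    # target port: own namespace, target IP
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # initiator port: root namespace, initiator IP
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    # let NVMe/TCP (port 4420) in, then sanity-check both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1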
00:07:59.417 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.417 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:59.417 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.417 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.417 Malloc1 00:07:59.417 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.417 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:59.417 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.417 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.417 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.417 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:59.417 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.417 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.417 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.417 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:59.417 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.417 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.417 [2024-07-15 11:19:27.978608] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:59.417 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.417 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:59.417 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:59.417 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:59.417 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:59.417 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:59.417 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:59.417 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.417 11:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:59.417 11:19:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.417 11:19:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:59.417 { 00:07:59.417 "name": "Malloc1", 00:07:59.417 "aliases": [ 00:07:59.417 "2fdd6cab-65e2-4e14-8ead-1e750c4c1349" 00:07:59.417 ], 00:07:59.417 "product_name": "Malloc disk", 00:07:59.417 "block_size": 512, 00:07:59.417 "num_blocks": 1048576, 00:07:59.417 "uuid": "2fdd6cab-65e2-4e14-8ead-1e750c4c1349", 00:07:59.417 "assigned_rate_limits": { 00:07:59.417 "rw_ios_per_sec": 0, 00:07:59.417 "rw_mbytes_per_sec": 0, 00:07:59.417 "r_mbytes_per_sec": 0, 00:07:59.417 "w_mbytes_per_sec": 0 00:07:59.417 }, 00:07:59.417 "claimed": true, 00:07:59.417 "claim_type": "exclusive_write", 00:07:59.417 "zoned": false, 00:07:59.417 "supported_io_types": { 00:07:59.417 "read": true, 00:07:59.417 "write": true, 00:07:59.417 "unmap": true, 00:07:59.417 "flush": true, 00:07:59.417 "reset": true, 00:07:59.417 "nvme_admin": false, 00:07:59.417 "nvme_io": false, 00:07:59.417 "nvme_io_md": false, 00:07:59.417 "write_zeroes": true, 00:07:59.417 "zcopy": true, 00:07:59.417 "get_zone_info": false, 00:07:59.417 "zone_management": false, 00:07:59.417 "zone_append": false, 00:07:59.417 "compare": false, 00:07:59.417 "compare_and_write": false, 00:07:59.417 "abort": true, 00:07:59.417 "seek_hole": false, 00:07:59.417 "seek_data": false, 00:07:59.417 "copy": true, 00:07:59.417 "nvme_iov_md": false 00:07:59.417 }, 00:07:59.417 "memory_domains": [ 00:07:59.417 { 00:07:59.417 "dma_device_id": "system", 00:07:59.417 "dma_device_type": 1 00:07:59.417 }, 00:07:59.417 { 00:07:59.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.417 "dma_device_type": 2 00:07:59.417 } 00:07:59.417 ], 00:07:59.417 "driver_specific": {} 00:07:59.417 } 00:07:59.417 ]' 00:07:59.417 11:19:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:59.417 11:19:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:59.417 11:19:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:59.417 11:19:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:59.417 11:19:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:59.417 11:19:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:59.417 11:19:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:59.417 11:19:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:01.330 11:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:01.330 11:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:01.330 11:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:08:01.330 11:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:01.330 11:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:03.239 11:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:03.239 11:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:03.239 11:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:03.239 11:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:03.239 11:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:03.239 11:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:03.239 11:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:03.239 11:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:03.239 11:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:03.239 11:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:03.239 11:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:03.239 11:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:03.239 11:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:03.239 11:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:03.239 11:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:03.239 11:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:03.239 11:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:03.239 11:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:03.809 11:19:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:04.749 11:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:04.749 11:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:04.749 11:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:04.749 11:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.749 11:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.749 
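Stripped of the xtrace noise, the bring-up for this first pass (in_capsule=0) reduces to a handful of RPC and nvme-cli calls before the per-filesystem tests below start. rpc_cmd is the harness wrapper that talks to the target's RPC socket (scripts/rpc.py); the hostnqn/hostid UUID is the rig-specific value seen in the trace, and -c on nvmf_create_transport is the in-capsule data size that the two passes of this test vary. A sketch, not a verbatim replay of the harness:

    # target side (inside cvl_0_0_ns_spdk)
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0          # no in-capsule data on this pass
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1                 # 512 MiB RAM bdev, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side (root namespace)
    HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:$HOSTID --hostid=$HOSTID \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # simplified waitforserial: poll until the namespace shows up with the expected serial
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do sleep 2; done
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe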
************************************ 00:08:04.749 START TEST filesystem_ext4 00:08:04.749 ************************************ 00:08:04.749 11:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:04.749 11:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:04.749 11:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:04.749 11:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:04.749 11:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:04.749 11:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:04.749 11:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:04.749 11:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:04.749 11:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:04.749 11:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:04.749 11:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:04.749 mke2fs 1.46.5 (30-Dec-2021) 00:08:05.009 Discarding device blocks: 0/522240 done 00:08:05.009 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:05.009 Filesystem UUID: 2f444cdf-81a0-4d81-8222-55cc05205b34 00:08:05.009 Superblock backups stored on blocks: 00:08:05.009 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:05.009 00:08:05.009 Allocating group tables: 0/64 done 00:08:05.009 Writing inode tables: 0/64 done 00:08:05.009 Creating journal (8192 blocks): done 00:08:05.269 Writing superblocks and filesystem accounting information: 0/64 done 00:08:05.269 00:08:05.269 11:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:05.269 11:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:05.840 11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:05.840 11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:05.840 11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:05.840 11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:05.840 11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:05.840 11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:05.840 11:19:34 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3364985 00:08:05.840 11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:05.840 11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:05.840 11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:05.840 11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:05.840 00:08:05.840 real 0m0.999s 00:08:05.840 user 0m0.029s 00:08:05.840 sys 0m0.067s 00:08:05.840 11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.840 11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:05.840 ************************************ 00:08:05.840 END TEST filesystem_ext4 00:08:05.840 ************************************ 00:08:05.840 11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:05.840 11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:05.840 11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:05.840 11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.840 11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:05.840 ************************************ 00:08:05.840 START TEST filesystem_btrfs 00:08:05.840 ************************************ 00:08:05.840 11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:05.840 11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:05.840 11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:05.840 11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:05.840 11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:05.840 11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:05.840 11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:05.841 11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:05.841 11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:05.841 11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:05.841 
11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:06.101 btrfs-progs v6.6.2 00:08:06.101 See https://btrfs.readthedocs.io for more information. 00:08:06.101 00:08:06.101 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:06.101 NOTE: several default settings have changed in version 5.15, please make sure 00:08:06.101 this does not affect your deployments: 00:08:06.101 - DUP for metadata (-m dup) 00:08:06.101 - enabled no-holes (-O no-holes) 00:08:06.101 - enabled free-space-tree (-R free-space-tree) 00:08:06.101 00:08:06.101 Label: (null) 00:08:06.101 UUID: 4201ef16-eb13-4f08-9ab1-a4c905117e0c 00:08:06.101 Node size: 16384 00:08:06.101 Sector size: 4096 00:08:06.101 Filesystem size: 510.00MiB 00:08:06.101 Block group profiles: 00:08:06.101 Data: single 8.00MiB 00:08:06.101 Metadata: DUP 32.00MiB 00:08:06.101 System: DUP 8.00MiB 00:08:06.101 SSD detected: yes 00:08:06.101 Zoned device: no 00:08:06.101 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:06.101 Runtime features: free-space-tree 00:08:06.101 Checksum: crc32c 00:08:06.101 Number of devices: 1 00:08:06.101 Devices: 00:08:06.101 ID SIZE PATH 00:08:06.101 1 510.00MiB /dev/nvme0n1p1 00:08:06.101 00:08:06.101 11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:06.101 11:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:06.672 11:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:06.672 11:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:06.672 11:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:06.672 11:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:06.672 11:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:06.672 11:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:06.672 11:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3364985 00:08:06.672 11:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:06.672 11:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:06.672 11:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:06.672 11:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:06.672 00:08:06.672 real 0m0.643s 00:08:06.672 user 0m0.029s 00:08:06.672 sys 0m0.132s 00:08:06.672 11:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.672 11:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 
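Each filesystem_* subtest (ext4 above, btrfs here, xfs next) runs the same small smoke test against the exported namespace; roughly, with fstype, force and nvmfpid as in the trace:

    mkfs.$fstype $force /dev/nvme0n1p1      # force is -F for ext4, -f for btrfs/xfs
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync           # push a write through the NVMe/TCP path
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                      # the target process must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1   # device and partition must still be visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1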
00:08:06.672 ************************************ 00:08:06.672 END TEST filesystem_btrfs 00:08:06.672 ************************************ 00:08:06.672 11:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:06.672 11:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:06.672 11:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:06.672 11:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.672 11:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.672 ************************************ 00:08:06.672 START TEST filesystem_xfs 00:08:06.672 ************************************ 00:08:06.672 11:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:06.672 11:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:06.672 11:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:06.672 11:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:06.672 11:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:06.672 11:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:06.672 11:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:06.672 11:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:08:06.672 11:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:06.672 11:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:06.672 11:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:06.672 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:06.672 = sectsz=512 attr=2, projid32bit=1 00:08:06.672 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:06.672 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:06.672 data = bsize=4096 blocks=130560, imaxpct=25 00:08:06.672 = sunit=0 swidth=0 blks 00:08:06.672 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:06.672 log =internal log bsize=4096 blocks=16384, version=2 00:08:06.672 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:06.672 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:07.614 Discarding blocks...Done. 
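The mkfs geometry lines double as a size sanity check: ext4 reported 522240 1 KiB blocks, xfs reports 130560 4 KiB blocks, and btrfs printed 510.00MiB outright, all the same 510 MiB, i.e. the 512 MiB Malloc1 bdev minus roughly 2 MiB lost to the GPT label and parted's default partition alignment:

    echo $((522240 * 1024))       # 534773760 bytes (ext4)
    echo $((130560 * 4096))       # 534773760 bytes (xfs)
    echo $((510 * 1024 * 1024))   # 534773760 bytes = 510 MiB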
00:08:07.614 11:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:07.614 11:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:09.528 11:19:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:09.528 11:19:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:09.528 11:19:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:09.528 11:19:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:09.528 11:19:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:09.528 11:19:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:09.528 11:19:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3364985 00:08:09.528 11:19:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:09.528 11:19:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:09.528 11:19:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:09.528 11:19:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:09.528 00:08:09.528 real 0m2.758s 00:08:09.528 user 0m0.035s 00:08:09.528 sys 0m0.068s 00:08:09.528 11:19:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.528 11:19:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:09.528 ************************************ 00:08:09.528 END TEST filesystem_xfs 00:08:09.528 ************************************ 00:08:09.528 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:09.528 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:09.528 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:09.528 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:09.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.528 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:09.528 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:09.528 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:09.528 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:09.528 11:19:38 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:09.528 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:09.528 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:09.528 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:09.528 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.528 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.528 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.528 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:09.528 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3364985 00:08:09.528 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3364985 ']' 00:08:09.528 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3364985 00:08:09.528 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:09.528 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:09.528 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3364985 00:08:09.789 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:09.789 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:09.789 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3364985' 00:08:09.789 killing process with pid 3364985 00:08:09.789 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 3364985 00:08:09.789 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 3364985 00:08:10.050 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:10.050 00:08:10.050 real 0m11.543s 00:08:10.050 user 0m45.402s 00:08:10.050 sys 0m1.166s 00:08:10.050 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.050 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.050 ************************************ 00:08:10.050 END TEST nvmf_filesystem_no_in_capsule 00:08:10.050 ************************************ 00:08:10.050 11:19:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:10.050 11:19:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:10.050 11:19:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:08:10.050 11:19:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.050 11:19:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.050 ************************************ 00:08:10.050 START TEST nvmf_filesystem_in_capsule 00:08:10.050 ************************************ 00:08:10.050 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:08:10.050 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:10.050 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:10.050 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:10.050 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:10.050 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.050 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3367358 00:08:10.050 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3367358 00:08:10.050 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:10.050 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3367358 ']' 00:08:10.050 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.050 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:10.050 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.050 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:10.050 11:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.050 [2024-07-15 11:19:38.640509] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:08:10.050 [2024-07-15 11:19:38.640559] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.050 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.050 [2024-07-15 11:19:38.706576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:10.315 [2024-07-15 11:19:38.773589] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.315 [2024-07-15 11:19:38.773625] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
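This second pass repeats the same flow against a fresh target (pid 3367358); the only functional difference is the 4096 handed to nvmf_filesystem_part, which becomes the in-capsule data size of the transport created just below, so small writes can ride inside the NVMe/TCP command capsule instead of going through a separate R2T/H2CData exchange:

    # pass 1 (nvmf_filesystem_no_in_capsule): nvmf_filesystem_part 0
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    # pass 2 (nvmf_filesystem_in_capsule):    nvmf_filesystem_part 4096
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096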
00:08:10.315 [2024-07-15 11:19:38.773633] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.315 [2024-07-15 11:19:38.773639] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:10.316 [2024-07-15 11:19:38.773644] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:10.316 [2024-07-15 11:19:38.773794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.316 [2024-07-15 11:19:38.773900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.316 [2024-07-15 11:19:38.774056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.316 [2024-07-15 11:19:38.774058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.890 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:10.890 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:10.890 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:10.890 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:10.890 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.890 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.890 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:10.890 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:10.890 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.891 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.891 [2024-07-15 11:19:39.466841] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.891 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.891 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:10.891 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.891 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.891 Malloc1 00:08:10.891 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.891 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:10.891 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.891 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.891 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.891 11:19:39 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:10.891 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.891 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.891 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.891 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:10.891 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.891 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:11.151 [2024-07-15 11:19:39.592384] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:11.151 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.151 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:11.151 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:11.151 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:11.151 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:11.151 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:11.151 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:11.151 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.151 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:11.151 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.151 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:11.151 { 00:08:11.151 "name": "Malloc1", 00:08:11.151 "aliases": [ 00:08:11.151 "49c5bfdf-3878-4aee-a983-51283ced0cdc" 00:08:11.151 ], 00:08:11.151 "product_name": "Malloc disk", 00:08:11.151 "block_size": 512, 00:08:11.151 "num_blocks": 1048576, 00:08:11.151 "uuid": "49c5bfdf-3878-4aee-a983-51283ced0cdc", 00:08:11.151 "assigned_rate_limits": { 00:08:11.151 "rw_ios_per_sec": 0, 00:08:11.151 "rw_mbytes_per_sec": 0, 00:08:11.151 "r_mbytes_per_sec": 0, 00:08:11.151 "w_mbytes_per_sec": 0 00:08:11.151 }, 00:08:11.151 "claimed": true, 00:08:11.151 "claim_type": "exclusive_write", 00:08:11.151 "zoned": false, 00:08:11.151 "supported_io_types": { 00:08:11.152 "read": true, 00:08:11.152 "write": true, 00:08:11.152 "unmap": true, 00:08:11.152 "flush": true, 00:08:11.152 "reset": true, 00:08:11.152 "nvme_admin": false, 00:08:11.152 "nvme_io": false, 00:08:11.152 "nvme_io_md": false, 00:08:11.152 "write_zeroes": true, 00:08:11.152 "zcopy": true, 00:08:11.152 "get_zone_info": false, 00:08:11.152 "zone_management": false, 00:08:11.152 
"zone_append": false, 00:08:11.152 "compare": false, 00:08:11.152 "compare_and_write": false, 00:08:11.152 "abort": true, 00:08:11.152 "seek_hole": false, 00:08:11.152 "seek_data": false, 00:08:11.152 "copy": true, 00:08:11.152 "nvme_iov_md": false 00:08:11.152 }, 00:08:11.152 "memory_domains": [ 00:08:11.152 { 00:08:11.152 "dma_device_id": "system", 00:08:11.152 "dma_device_type": 1 00:08:11.152 }, 00:08:11.152 { 00:08:11.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.152 "dma_device_type": 2 00:08:11.152 } 00:08:11.152 ], 00:08:11.152 "driver_specific": {} 00:08:11.152 } 00:08:11.152 ]' 00:08:11.152 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:11.152 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:11.152 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:11.152 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:11.152 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:11.152 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:11.152 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:11.152 11:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:13.063 11:19:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:13.063 11:19:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:13.063 11:19:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:13.063 11:19:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:13.063 11:19:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:14.973 11:19:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:14.973 11:19:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:14.973 11:19:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:14.973 11:19:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:14.973 11:19:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:14.973 11:19:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:14.973 11:19:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:14.973 11:19:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:08:14.973 11:19:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:14.973 11:19:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:14.973 11:19:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:14.973 11:19:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:14.973 11:19:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:14.973 11:19:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:14.973 11:19:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:14.973 11:19:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:14.973 11:19:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:14.973 11:19:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:15.234 11:19:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:16.234 11:19:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:16.234 11:19:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:16.234 11:19:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:16.234 11:19:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.234 11:19:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:16.234 ************************************ 00:08:16.234 START TEST filesystem_in_capsule_ext4 00:08:16.234 ************************************ 00:08:16.234 11:19:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:16.234 11:19:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:16.234 11:19:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:16.234 11:19:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:16.234 11:19:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:16.234 11:19:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:16.234 11:19:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:16.234 11:19:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:16.234 11:19:44 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:16.234 11:19:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:16.234 11:19:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:16.234 mke2fs 1.46.5 (30-Dec-2021) 00:08:16.234 Discarding device blocks: 0/522240 done 00:08:16.234 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:16.234 Filesystem UUID: f1410219-c4e6-4df7-8d2c-798569a4ed04 00:08:16.234 Superblock backups stored on blocks: 00:08:16.234 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:16.234 00:08:16.234 Allocating group tables: 0/64 done 00:08:16.234 Writing inode tables: 0/64 done 00:08:16.495 Creating journal (8192 blocks): done 00:08:17.464 Writing superblocks and filesystem accounting information: 0/64 done 00:08:17.464 00:08:17.464 11:19:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:17.464 11:19:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:17.464 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:17.464 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:17.464 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:17.464 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:17.464 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:17.464 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:17.724 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3367358 00:08:17.724 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:17.724 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:17.724 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:17.724 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:17.724 00:08:17.724 real 0m1.448s 00:08:17.724 user 0m0.034s 00:08:17.724 sys 0m0.063s 00:08:17.724 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.724 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:17.724 ************************************ 00:08:17.724 END TEST filesystem_in_capsule_ext4 00:08:17.724 ************************************ 00:08:17.724 
11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:17.724 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:17.724 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:17.724 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.724 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:17.724 ************************************ 00:08:17.724 START TEST filesystem_in_capsule_btrfs 00:08:17.724 ************************************ 00:08:17.724 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:17.724 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:17.724 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:17.724 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:17.724 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:17.724 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:17.724 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:17.724 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:17.724 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:17.724 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:17.724 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:18.294 btrfs-progs v6.6.2 00:08:18.294 See https://btrfs.readthedocs.io for more information. 00:08:18.294 00:08:18.294 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:18.294 NOTE: several default settings have changed in version 5.15, please make sure 00:08:18.294 this does not affect your deployments: 00:08:18.294 - DUP for metadata (-m dup) 00:08:18.294 - enabled no-holes (-O no-holes) 00:08:18.294 - enabled free-space-tree (-R free-space-tree) 00:08:18.294 00:08:18.294 Label: (null) 00:08:18.294 UUID: a8a87313-da01-49af-8563-0a794c07293e 00:08:18.294 Node size: 16384 00:08:18.294 Sector size: 4096 00:08:18.294 Filesystem size: 510.00MiB 00:08:18.294 Block group profiles: 00:08:18.294 Data: single 8.00MiB 00:08:18.294 Metadata: DUP 32.00MiB 00:08:18.294 System: DUP 8.00MiB 00:08:18.294 SSD detected: yes 00:08:18.294 Zoned device: no 00:08:18.294 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:18.294 Runtime features: free-space-tree 00:08:18.294 Checksum: crc32c 00:08:18.294 Number of devices: 1 00:08:18.294 Devices: 00:08:18.294 ID SIZE PATH 00:08:18.294 1 510.00MiB /dev/nvme0n1p1 00:08:18.294 00:08:18.294 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:18.294 11:19:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:19.235 11:19:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:19.235 11:19:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:19.235 11:19:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:19.235 11:19:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:19.235 11:19:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:19.235 11:19:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:19.235 11:19:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3367358 00:08:19.235 11:19:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:19.235 11:19:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:19.235 11:19:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:19.235 11:19:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:19.235 00:08:19.235 real 0m1.374s 00:08:19.235 user 0m0.037s 00:08:19.235 sys 0m0.128s 00:08:19.235 11:19:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.235 11:19:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:19.235 ************************************ 00:08:19.235 END TEST filesystem_in_capsule_btrfs 00:08:19.235 ************************************ 00:08:19.235 11:19:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:08:19.235 11:19:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:19.235 11:19:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:19.235 11:19:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.235 11:19:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:19.235 ************************************ 00:08:19.235 START TEST filesystem_in_capsule_xfs 00:08:19.235 ************************************ 00:08:19.235 11:19:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:19.235 11:19:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:19.235 11:19:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:19.235 11:19:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:19.235 11:19:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:19.235 11:19:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:19.235 11:19:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:19.235 11:19:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:19.235 11:19:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:19.235 11:19:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:19.235 11:19:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:19.235 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:19.235 = sectsz=512 attr=2, projid32bit=1 00:08:19.235 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:19.235 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:19.235 data = bsize=4096 blocks=130560, imaxpct=25 00:08:19.235 = sunit=0 swidth=0 blks 00:08:19.235 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:19.235 log =internal log bsize=4096 blocks=16384, version=2 00:08:19.235 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:19.235 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:20.176 Discarding blocks...Done. 
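
The ext4, btrfs and xfs sub-tests traced in this stretch all exercise the same pattern: pick the right "force" flag for the mkfs tool (ext4 takes -F, btrfs and xfs take -f, as visible in make_filesystem), build the filesystem on the GPT partition, then prove basic I/O with a mount/touch/sync/rm/umount cycle. A hedged re-creation of that flow is sketched below; the mount point and steps mirror the trace, but this is an illustration, not the project's filesystem.sh.

    # Illustrative sketch of the per-filesystem smoke test seen in the trace.
    nvmf_filesystem_smoke() {
        local fstype=$1 dev_name=$2 force

        # mke2fs wants -F to force; btrfs-progs and xfsprogs want -f.
        if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi

        "mkfs.$fstype" "$force" "$dev_name"

        mkdir -p /mnt/device
        mount "$dev_name" /mnt/device    # mount the fresh filesystem
        touch /mnt/device/aaa            # create a file ...
        sync
        rm /mnt/device/aaa               # ... and delete it again
        sync
        umount /mnt/device               # a clean unmount closes the smoke test
    }

    # Example call matching the partition used above:
    # nvmf_filesystem_smoke xfs /dev/nvme0n1p1
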
00:08:20.176 11:19:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:20.176 11:19:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:22.088 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:22.088 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:22.088 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:22.088 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:22.088 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:22.088 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:22.088 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3367358 00:08:22.088 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:22.088 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:22.088 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:22.088 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:22.088 00:08:22.088 real 0m2.824s 00:08:22.088 user 0m0.017s 00:08:22.088 sys 0m0.085s 00:08:22.088 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.088 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:22.088 ************************************ 00:08:22.088 END TEST filesystem_in_capsule_xfs 00:08:22.088 ************************************ 00:08:22.088 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:22.088 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:22.088 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:22.088 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:22.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.349 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:22.349 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:22.349 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:22.349 11:19:50 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:22.349 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:22.349 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:22.349 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:22.349 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:22.349 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.349 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:22.349 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.349 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:22.349 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3367358 00:08:22.349 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3367358 ']' 00:08:22.349 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3367358 00:08:22.349 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:22.349 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:22.349 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3367358 00:08:22.349 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:22.349 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:22.349 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3367358' 00:08:22.349 killing process with pid 3367358 00:08:22.349 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 3367358 00:08:22.349 11:19:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 3367358 00:08:22.610 11:19:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:22.610 00:08:22.610 real 0m12.546s 00:08:22.610 user 0m49.479s 00:08:22.610 sys 0m1.184s 00:08:22.610 11:19:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.610 11:19:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:22.610 ************************************ 00:08:22.610 END TEST nvmf_filesystem_in_capsule 00:08:22.610 ************************************ 00:08:22.610 11:19:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:22.610 11:19:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:22.610 11:19:51 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:08:22.611 11:19:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:22.611 11:19:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:22.611 11:19:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:22.611 11:19:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:22.611 11:19:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:22.611 rmmod nvme_tcp 00:08:22.611 rmmod nvme_fabrics 00:08:22.611 rmmod nvme_keyring 00:08:22.611 11:19:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:22.611 11:19:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:22.611 11:19:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:22.611 11:19:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:22.611 11:19:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:22.611 11:19:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:22.611 11:19:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:22.611 11:19:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:22.611 11:19:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:22.611 11:19:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.611 11:19:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:22.611 11:19:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.153 11:19:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:25.153 00:08:25.153 real 0m33.468s 00:08:25.153 user 1m36.927s 00:08:25.153 sys 0m7.585s 00:08:25.153 11:19:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.153 11:19:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:25.153 ************************************ 00:08:25.153 END TEST nvmf_filesystem 00:08:25.153 ************************************ 00:08:25.153 11:19:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:25.153 11:19:53 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:25.153 11:19:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:25.153 11:19:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.153 11:19:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:25.153 ************************************ 00:08:25.153 START TEST nvmf_target_discovery 00:08:25.153 ************************************ 00:08:25.153 11:19:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:25.153 * Looking for test storage... 
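
Between the filesystem suite and the discovery suite the harness tears the fabric back down: the initiator disconnects, the subsystem is deleted over RPC, the nvmf_tgt process is stopped, the kernel NVMe/TCP modules are unloaded and the test interface address is flushed. A compact, hedged rendering of that order is shown below; the rpc.py path, NQN and interface name are taken from this job, and the pid handling is simplified compared to the killprocess helper used in the trace.

    # Simplified sketch of the teardown order traced above (not the harness's exact helpers).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    nvmfpid=3367358                        # nvmf_tgt pid for this run, normally captured at start-up

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # drop the host session
    "$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    kill "$nvmfpid"
    while kill -0 "$nvmfpid" 2>/dev/null; do sleep 1; done   # wait for the target to exit

    sync
    modprobe -r nvme-tcp                   # the log shows nvme_tcp, nvme_fabrics and nvme_keyring going away
    modprobe -r nvme-fabrics
    ip -4 addr flush cvl_0_1               # clear the address from the initiator-side interface
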
00:08:25.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:25.153 11:19:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:25.153 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:25.153 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.153 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.153 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.153 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.153 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.153 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.153 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.153 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.153 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.153 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.153 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:25.153 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:25.153 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.153 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.153 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:25.153 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.153 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:25.153 11:19:53 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.153 11:19:53 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.153 11:19:53 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.153 11:19:53 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:25.154 11:19:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.740 11:20:00 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:31.740 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:31.740 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:31.740 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:31.741 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:31.741 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:31.741 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:32.003 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:32.003 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:32.003 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:32.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:32.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.479 ms 00:08:32.003 00:08:32.003 --- 10.0.0.2 ping statistics --- 00:08:32.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.003 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:08:32.003 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:32.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:32.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:08:32.003 00:08:32.003 --- 10.0.0.1 ping statistics --- 00:08:32.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.003 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:08:32.003 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.003 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:32.003 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:32.003 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.003 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:32.003 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:32.003 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.003 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:32.003 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:32.003 11:20:00 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:32.003 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:32.003 11:20:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:32.003 11:20:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.003 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3374237 00:08:32.003 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3374237 00:08:32.003 11:20:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:32.003 11:20:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 3374237 ']' 00:08:32.003 11:20:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.003 11:20:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:32.003 11:20:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:32.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.003 11:20:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:32.003 11:20:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.003 [2024-07-15 11:20:00.659001] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:08:32.003 [2024-07-15 11:20:00.659069] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.003 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.264 [2024-07-15 11:20:00.732265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.264 [2024-07-15 11:20:00.806135] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.264 [2024-07-15 11:20:00.806177] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.264 [2024-07-15 11:20:00.806184] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.264 [2024-07-15 11:20:00.806191] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.264 [2024-07-15 11:20:00.806196] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:32.264 [2024-07-15 11:20:00.806281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.264 [2024-07-15 11:20:00.806416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.264 [2024-07-15 11:20:00.806574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.264 [2024-07-15 11:20:00.806574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.835 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:32.835 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:32.835 11:20:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:32.835 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:32.835 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.835 11:20:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.835 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:32.835 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.835 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.835 [2024-07-15 11:20:01.479749] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.835 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.835 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:32.835 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:32.835 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
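
The discovery test is building four identical subsystems at this point: each pass of the loop creates a null bdev of 102400 blocks of 512 bytes, wraps it in a subsystem with a fixed serial, attaches the bdev as namespace 1 and exposes it on the TCP listener; afterwards the discovery subsystem itself gets a listener plus a referral on port 4430. The same sequence expressed directly against scripts/rpc.py, rather than through the rpc_cmd wrapper used in the trace, would look roughly like this:

    # Rough rpc.py equivalent of the setup the trace is entering here (illustrative).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    "$RPC" nvmf_create_transport -t tcp -o -u 8192           # TCP transport, 8 KiB in-capsule data

    for i in 1 2 3 4; do
        "$RPC" bdev_null_create "Null$i" 102400 512           # 102400 x 512 B null bdev
        "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
               -a -s "SPDK0000000000000$i"                    # allow any host, serial as in the log
        "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
               -t tcp -a 10.0.0.2 -s 4420
    done

    # Make the discovery service reachable and advertise a referral on a second port.
    "$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    "$RPC" nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
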
00:08:32.835 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.835 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.835 Null1 00:08:32.835 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.835 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:32.835 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.835 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.835 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.835 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:32.835 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.835 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.835 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.835 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:32.835 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.835 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.095 [2024-07-15 11:20:01.540083] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.096 Null2 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:33.096 11:20:01 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.096 Null3 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.096 Null4 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.096 11:20:01 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.096 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:08:33.357 00:08:33.357 Discovery Log Number of Records 6, Generation counter 6 00:08:33.357 =====Discovery Log Entry 0====== 00:08:33.357 trtype: tcp 00:08:33.357 adrfam: ipv4 00:08:33.357 subtype: current discovery subsystem 00:08:33.357 treq: not required 00:08:33.357 portid: 0 00:08:33.357 trsvcid: 4420 00:08:33.357 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:33.357 traddr: 10.0.0.2 00:08:33.357 eflags: explicit discovery connections, duplicate discovery information 00:08:33.357 sectype: none 00:08:33.357 =====Discovery Log Entry 1====== 00:08:33.357 trtype: tcp 00:08:33.357 adrfam: ipv4 00:08:33.357 subtype: nvme subsystem 00:08:33.357 treq: not required 00:08:33.357 portid: 0 00:08:33.357 trsvcid: 4420 00:08:33.357 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:33.357 traddr: 10.0.0.2 00:08:33.357 eflags: none 00:08:33.357 sectype: none 00:08:33.357 =====Discovery Log Entry 2====== 00:08:33.357 trtype: tcp 00:08:33.357 adrfam: ipv4 00:08:33.357 subtype: nvme subsystem 00:08:33.357 treq: not required 00:08:33.357 portid: 0 00:08:33.357 trsvcid: 4420 00:08:33.357 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:33.357 traddr: 10.0.0.2 00:08:33.357 eflags: none 00:08:33.357 sectype: none 00:08:33.357 =====Discovery Log Entry 3====== 00:08:33.357 trtype: tcp 00:08:33.357 adrfam: ipv4 00:08:33.357 subtype: nvme subsystem 00:08:33.357 treq: not required 00:08:33.357 portid: 0 00:08:33.357 trsvcid: 4420 00:08:33.357 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:33.357 traddr: 10.0.0.2 00:08:33.357 eflags: none 00:08:33.357 sectype: none 00:08:33.357 =====Discovery Log Entry 4====== 00:08:33.357 trtype: tcp 00:08:33.357 adrfam: ipv4 00:08:33.357 subtype: nvme subsystem 00:08:33.357 treq: not required 
00:08:33.357 portid: 0 00:08:33.357 trsvcid: 4420 00:08:33.357 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:33.357 traddr: 10.0.0.2 00:08:33.357 eflags: none 00:08:33.357 sectype: none 00:08:33.357 =====Discovery Log Entry 5====== 00:08:33.357 trtype: tcp 00:08:33.357 adrfam: ipv4 00:08:33.357 subtype: discovery subsystem referral 00:08:33.357 treq: not required 00:08:33.357 portid: 0 00:08:33.357 trsvcid: 4430 00:08:33.357 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:33.357 traddr: 10.0.0.2 00:08:33.357 eflags: none 00:08:33.357 sectype: none 00:08:33.357 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:33.357 Perform nvmf subsystem discovery via RPC 00:08:33.357 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:33.357 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.357 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.357 [ 00:08:33.357 { 00:08:33.357 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:33.357 "subtype": "Discovery", 00:08:33.357 "listen_addresses": [ 00:08:33.357 { 00:08:33.357 "trtype": "TCP", 00:08:33.357 "adrfam": "IPv4", 00:08:33.357 "traddr": "10.0.0.2", 00:08:33.357 "trsvcid": "4420" 00:08:33.357 } 00:08:33.357 ], 00:08:33.357 "allow_any_host": true, 00:08:33.357 "hosts": [] 00:08:33.357 }, 00:08:33.357 { 00:08:33.357 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:33.357 "subtype": "NVMe", 00:08:33.357 "listen_addresses": [ 00:08:33.357 { 00:08:33.357 "trtype": "TCP", 00:08:33.357 "adrfam": "IPv4", 00:08:33.357 "traddr": "10.0.0.2", 00:08:33.357 "trsvcid": "4420" 00:08:33.357 } 00:08:33.357 ], 00:08:33.357 "allow_any_host": true, 00:08:33.357 "hosts": [], 00:08:33.357 "serial_number": "SPDK00000000000001", 00:08:33.357 "model_number": "SPDK bdev Controller", 00:08:33.357 "max_namespaces": 32, 00:08:33.357 "min_cntlid": 1, 00:08:33.357 "max_cntlid": 65519, 00:08:33.357 "namespaces": [ 00:08:33.357 { 00:08:33.357 "nsid": 1, 00:08:33.357 "bdev_name": "Null1", 00:08:33.357 "name": "Null1", 00:08:33.357 "nguid": "D290B01BF29945C7B396001115E841D1", 00:08:33.357 "uuid": "d290b01b-f299-45c7-b396-001115e841d1" 00:08:33.357 } 00:08:33.357 ] 00:08:33.357 }, 00:08:33.357 { 00:08:33.357 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:33.357 "subtype": "NVMe", 00:08:33.357 "listen_addresses": [ 00:08:33.357 { 00:08:33.357 "trtype": "TCP", 00:08:33.357 "adrfam": "IPv4", 00:08:33.357 "traddr": "10.0.0.2", 00:08:33.357 "trsvcid": "4420" 00:08:33.357 } 00:08:33.357 ], 00:08:33.357 "allow_any_host": true, 00:08:33.357 "hosts": [], 00:08:33.357 "serial_number": "SPDK00000000000002", 00:08:33.357 "model_number": "SPDK bdev Controller", 00:08:33.357 "max_namespaces": 32, 00:08:33.357 "min_cntlid": 1, 00:08:33.357 "max_cntlid": 65519, 00:08:33.357 "namespaces": [ 00:08:33.357 { 00:08:33.357 "nsid": 1, 00:08:33.357 "bdev_name": "Null2", 00:08:33.357 "name": "Null2", 00:08:33.357 "nguid": "0F16B7A3FE544C518A9606BD490B9C27", 00:08:33.357 "uuid": "0f16b7a3-fe54-4c51-8a96-06bd490b9c27" 00:08:33.357 } 00:08:33.357 ] 00:08:33.357 }, 00:08:33.357 { 00:08:33.357 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:33.357 "subtype": "NVMe", 00:08:33.357 "listen_addresses": [ 00:08:33.357 { 00:08:33.357 "trtype": "TCP", 00:08:33.357 "adrfam": "IPv4", 00:08:33.357 "traddr": "10.0.0.2", 00:08:33.357 "trsvcid": "4420" 00:08:33.357 } 00:08:33.357 ], 00:08:33.357 "allow_any_host": true, 
00:08:33.357 "hosts": [], 00:08:33.357 "serial_number": "SPDK00000000000003", 00:08:33.357 "model_number": "SPDK bdev Controller", 00:08:33.357 "max_namespaces": 32, 00:08:33.357 "min_cntlid": 1, 00:08:33.357 "max_cntlid": 65519, 00:08:33.357 "namespaces": [ 00:08:33.357 { 00:08:33.357 "nsid": 1, 00:08:33.357 "bdev_name": "Null3", 00:08:33.357 "name": "Null3", 00:08:33.357 "nguid": "F4CFD68BAED94EB4A62C3D4ECFE5B45D", 00:08:33.357 "uuid": "f4cfd68b-aed9-4eb4-a62c-3d4ecfe5b45d" 00:08:33.357 } 00:08:33.357 ] 00:08:33.357 }, 00:08:33.357 { 00:08:33.357 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:33.357 "subtype": "NVMe", 00:08:33.357 "listen_addresses": [ 00:08:33.357 { 00:08:33.357 "trtype": "TCP", 00:08:33.357 "adrfam": "IPv4", 00:08:33.357 "traddr": "10.0.0.2", 00:08:33.357 "trsvcid": "4420" 00:08:33.357 } 00:08:33.357 ], 00:08:33.357 "allow_any_host": true, 00:08:33.357 "hosts": [], 00:08:33.357 "serial_number": "SPDK00000000000004", 00:08:33.357 "model_number": "SPDK bdev Controller", 00:08:33.357 "max_namespaces": 32, 00:08:33.357 "min_cntlid": 1, 00:08:33.357 "max_cntlid": 65519, 00:08:33.357 "namespaces": [ 00:08:33.357 { 00:08:33.357 "nsid": 1, 00:08:33.357 "bdev_name": "Null4", 00:08:33.357 "name": "Null4", 00:08:33.357 "nguid": "6A109ACB0B9F4B5589D74B6D8838EB23", 00:08:33.357 "uuid": "6a109acb-0b9f-4b55-89d7-4b6d8838eb23" 00:08:33.357 } 00:08:33.357 ] 00:08:33.357 } 00:08:33.357 ] 00:08:33.357 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.357 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:33.357 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:33.357 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:33.357 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.357 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.357 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.357 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.358 11:20:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.358 11:20:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:33.358 11:20:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:33.358 11:20:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:33.358 11:20:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:33.358 11:20:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:33.358 11:20:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:33.358 11:20:02 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:33.358 11:20:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:33.358 11:20:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:33.358 11:20:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:33.358 rmmod nvme_tcp 00:08:33.358 rmmod nvme_fabrics 00:08:33.358 rmmod nvme_keyring 00:08:33.619 11:20:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:33.619 11:20:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:33.619 11:20:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:33.619 11:20:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3374237 ']' 00:08:33.619 11:20:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3374237 00:08:33.619 11:20:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 3374237 ']' 00:08:33.619 11:20:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 3374237 00:08:33.619 11:20:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:33.619 11:20:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:33.619 11:20:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3374237 00:08:33.619 11:20:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:33.619 11:20:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:33.619 11:20:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3374237' 00:08:33.619 killing process with pid 3374237 00:08:33.619 11:20:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 3374237 00:08:33.619 11:20:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 3374237 00:08:33.619 11:20:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:33.619 11:20:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:33.619 11:20:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:33.619 11:20:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:33.619 11:20:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:33.619 11:20:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.619 11:20:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:33.619 11:20:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.161 11:20:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:36.161 00:08:36.161 real 0m10.949s 00:08:36.161 user 0m8.118s 00:08:36.161 sys 0m5.507s 00:08:36.161 11:20:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.161 11:20:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:36.161 ************************************ 00:08:36.161 END TEST nvmf_target_discovery 00:08:36.161 ************************************ 00:08:36.161 11:20:04 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:08:36.161 11:20:04 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:36.161 11:20:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:36.161 11:20:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.161 11:20:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:36.161 ************************************ 00:08:36.161 START TEST nvmf_referrals 00:08:36.161 ************************************ 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:36.161 * Looking for test storage... 00:08:36.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
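Those three loopback addresses, together with NVMF_PORT_REFERRAL=4430 set just below, carry the whole referral check further down in this log: referrals are added over RPC, then read back both through nvmf_discovery_get_referrals and through the discovery log that nvme discover fetches from the 8009 discovery listener. A rough sketch of that round trip, again assuming scripts/rpc.py in place of the rpc_cmd wrapper and a target already listening for discovery on 10.0.0.2:8009:

    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_add_referral -t tcp -a $ip -s 4430
    done
    # The two views below are expected to agree on exactly these three addresses
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a $ip -s 4430
    done
    scripts/rpc.py nvmf_discovery_get_referrals | jq length   # back to 0

The later part of the test repeats the add with -n discovery and -n nqn.2016-06.io.spdk:cnode1 to check the subsystem NQN that the referral advertises.
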
00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:36.161 11:20:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:42.813 11:20:11 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:42.813 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:42.813 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:42.813 11:20:11 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:42.813 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:42.813 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:42.813 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:42.814 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:42.814 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:42.814 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:42.814 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:42.814 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:42.814 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:42.814 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:42.814 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:42.814 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:42.814 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:42.814 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:42.814 11:20:11 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:43.074 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:43.074 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:43.074 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:43.074 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:43.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:43.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.501 ms 00:08:43.074 00:08:43.074 --- 10.0.0.2 ping statistics --- 00:08:43.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.074 rtt min/avg/max/mdev = 0.501/0.501/0.501/0.000 ms 00:08:43.074 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:43.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:43.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:08:43.074 00:08:43.074 --- 10.0.0.1 ping statistics --- 00:08:43.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.074 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:08:43.074 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:43.074 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:43.074 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:43.074 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:43.074 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:43.074 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:43.074 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:43.074 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:43.074 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:43.074 11:20:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:43.074 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:43.074 11:20:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:43.074 11:20:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.074 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3378604 00:08:43.074 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3378604 00:08:43.074 11:20:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:43.074 11:20:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 3378604 ']' 00:08:43.074 11:20:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.074 11:20:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:43.074 11:20:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
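Before this nvmf_tgt launch, nvmf_tcp_init wired the two detected e810 ports together across a network namespace. Reduced to the bare commands visible in the trace (interface names are whatever this machine detected, so treat this as a sketch of the pattern, not a reusable script):

    ip netns add cvl_0_0_ns_spdk                          # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # first e810 port becomes the target NIC
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> root namespace

The nvmf_tgt itself then runs under ip netns exec cvl_0_0_ns_spdk, which is why the listeners and nvme discover commands later in this log all use 10.0.0.2.
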
00:08:43.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.074 11:20:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:43.074 11:20:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.074 [2024-07-15 11:20:11.755927] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:08:43.074 [2024-07-15 11:20:11.755980] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.334 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.334 [2024-07-15 11:20:11.823759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:43.334 [2024-07-15 11:20:11.892375] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.334 [2024-07-15 11:20:11.892415] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.334 [2024-07-15 11:20:11.892422] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.334 [2024-07-15 11:20:11.892429] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.334 [2024-07-15 11:20:11.892434] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:43.334 [2024-07-15 11:20:11.892570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.334 [2024-07-15 11:20:11.892687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.334 [2024-07-15 11:20:11.892842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.334 [2024-07-15 11:20:11.892843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:43.905 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:43.905 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:43.905 11:20:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:43.905 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:43.905 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.905 11:20:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:43.905 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:43.905 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.905 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.905 [2024-07-15 11:20:12.580819] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:43.905 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.905 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:43.905 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.905 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.905 [2024-07-15 11:20:12.597005] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:43.905 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.905 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:43.905 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.905 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:44.165 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:44.425 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:44.425 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:44.425 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:44.425 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.425 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:44.425 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.425 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:44.425 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.425 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:44.425 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.425 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:44.425 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.425 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:44.425 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.425 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:44.425 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:44.425 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.425 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:44.425 11:20:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.425 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:44.425 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:44.425 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:44.425 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:44.425 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:44.425 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:44.425 11:20:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:44.425 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:44.425 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:44.425 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:44.425 11:20:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.425 11:20:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:44.425 11:20:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.425 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:44.425 11:20:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.425 11:20:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:44.425 11:20:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.425 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:44.425 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:44.425 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:44.425 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:44.425 11:20:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.425 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:44.425 11:20:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:44.425 11:20:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.686 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:44.686 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:44.686 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:44.686 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:44.686 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:44.686 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:44.686 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:44.686 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:44.686 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:44.686 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:44.686 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:44.686 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:44.686 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:44.686 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:44.686 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:44.945 11:20:13 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:44.945 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:44.945 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:44.945 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:44.945 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:44.945 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:44.945 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:44.945 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:44.945 11:20:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.945 11:20:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:44.945 11:20:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.945 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:44.945 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:44.945 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:44.945 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:44.945 11:20:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.945 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:44.945 11:20:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:44.945 11:20:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.205 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:45.205 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:45.205 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:45.205 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:45.205 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:45.205 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:45.205 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:45.205 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:45.205 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:45.205 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:45.205 11:20:13 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:45.205 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:45.205 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:45.205 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:45.205 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:45.465 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:45.465 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:45.465 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:45.465 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:45.465 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:45.465 11:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:45.465 11:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:45.465 11:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:45.465 11:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.465 11:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:45.465 11:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.465 11:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:45.465 11:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:45.465 11:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.465 11:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:45.465 11:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.465 11:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:45.465 11:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:45.465 11:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:45.465 11:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:45.465 11:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:45.465 11:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:45.465 11:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:45.725 
11:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:45.725 11:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:45.725 11:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:45.725 11:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:45.725 11:20:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:45.725 11:20:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:45.725 11:20:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:45.725 11:20:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:45.725 11:20:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:45.725 11:20:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:45.725 rmmod nvme_tcp 00:08:45.725 rmmod nvme_fabrics 00:08:45.725 rmmod nvme_keyring 00:08:45.725 11:20:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:45.725 11:20:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:45.725 11:20:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:45.726 11:20:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3378604 ']' 00:08:45.726 11:20:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3378604 00:08:45.726 11:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 3378604 ']' 00:08:45.726 11:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 3378604 00:08:45.726 11:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:45.726 11:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:45.726 11:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3378604 00:08:45.726 11:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:45.726 11:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:45.726 11:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3378604' 00:08:45.726 killing process with pid 3378604 00:08:45.726 11:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 3378604 00:08:45.726 11:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 3378604 00:08:45.986 11:20:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:45.986 11:20:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:45.986 11:20:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:45.986 11:20:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:45.986 11:20:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:45.986 11:20:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.986 11:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:45.986 11:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.898 11:20:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:47.898 00:08:47.898 real 0m12.123s 00:08:47.898 user 0m13.266s 00:08:47.898 sys 0m5.970s 00:08:47.898 11:20:16 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:47.898 11:20:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:47.898 ************************************ 00:08:47.898 END TEST nvmf_referrals 00:08:47.898 ************************************ 00:08:47.898 11:20:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:47.898 11:20:16 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:47.898 11:20:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:47.898 11:20:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:47.898 11:20:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:48.159 ************************************ 00:08:48.159 START TEST nvmf_connect_disconnect 00:08:48.159 ************************************ 00:08:48.159 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:48.159 * Looking for test storage... 00:08:48.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:48.159 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:48.159 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:48.159 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.159 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.159 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.159 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:48.160 11:20:16 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:48.160 11:20:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:56.304 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:56.304 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:56.304 11:20:23 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:56.304 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:56.304 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- 
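The device scan above resolves each detected E810 PCI function to its kernel net device through sysfs before any addressing happens; condensed, the lookup looks roughly like this (PCI addresses and the resulting cvl_0_0/cvl_0_1 names are from this run):

    # Sketch: map E810 (8086:159b) PCI functions to net device names via sysfs.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$dev" ] || continue
            echo "Found net devices under $pci: ${dev##*/}"   # prints cvl_0_0 / cvl_0_1 here
        done
    done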
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:56.304 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:56.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:56.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.415 ms 00:08:56.304 00:08:56.304 --- 10.0.0.2 ping statistics --- 00:08:56.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.305 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:08:56.305 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:56.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:56.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:08:56.305 00:08:56.305 --- 10.0.0.1 ping statistics --- 00:08:56.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.305 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:08:56.305 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:56.305 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:56.305 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:56.305 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:56.305 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:56.305 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:56.305 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:56.305 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:56.305 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:56.305 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:56.305 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:56.305 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:56.305 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:56.305 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3383372 00:08:56.305 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3383372 00:08:56.305 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:56.305 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 3383372 ']' 00:08:56.305 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.305 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:56.305 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.305 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:56.305 11:20:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:56.305 [2024-07-15 11:20:23.953520] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
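After the bidirectional ping check succeeds, nvmfappstart launches the SPDK target inside the namespace and waits for its RPC socket; stripped of the test plumbing it is roughly the following (the polling loop is a simplification of waitforlisten, and rpc.py with rpc_get_methods stands in for the wrapper used by the test):

    # Sketch: start nvmf_tgt in the test namespace and wait until RPC answers.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done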
00:08:56.305 [2024-07-15 11:20:23.953572] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.305 EAL: No free 2048 kB hugepages reported on node 1 00:08:56.305 [2024-07-15 11:20:24.020814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:56.305 [2024-07-15 11:20:24.088383] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:56.305 [2024-07-15 11:20:24.088421] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:56.305 [2024-07-15 11:20:24.088429] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:56.305 [2024-07-15 11:20:24.088435] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:56.305 [2024-07-15 11:20:24.088440] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:56.305 [2024-07-15 11:20:24.088576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.305 [2024-07-15 11:20:24.088691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:56.305 [2024-07-15 11:20:24.088845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.305 [2024-07-15 11:20:24.088847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:56.305 11:20:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:56.305 11:20:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:56.305 11:20:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:56.305 11:20:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:56.305 11:20:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:56.305 11:20:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:56.305 11:20:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:56.305 11:20:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.305 11:20:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:56.305 [2024-07-15 11:20:24.778770] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:56.305 11:20:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.305 11:20:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:56.305 11:20:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.305 11:20:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:56.305 11:20:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.305 11:20:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:56.305 11:20:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:56.305 11:20:24 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.305 11:20:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:56.305 11:20:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.305 11:20:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:56.305 11:20:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.305 11:20:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:56.305 11:20:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.305 11:20:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.305 11:20:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.305 11:20:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:56.305 [2024-07-15 11:20:24.838204] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.305 11:20:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.305 11:20:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:56.305 11:20:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:56.305 11:20:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:00.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:14.649 rmmod nvme_tcp 00:09:14.649 rmmod nvme_fabrics 00:09:14.649 rmmod nvme_keyring 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3383372 ']' 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3383372 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- 
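With the listener up, the connect_disconnect test has created a TCP transport, a 64 MB malloc bdev (512-byte blocks), and one subsystem listening on 10.0.0.2:4420; the five "disconnected 1 controller(s)" lines that follow come from a loop running under set +x, so its body is inferred here from the output rather than copied from the log:

    # RPC setup as logged above (rpc_cmd wraps scripts/rpc.py against the target's socket).
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512                     # creates Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Inferred connect/disconnect loop (num_iterations=5 in this run; exact body not logged).
    for i in $(seq 1 5); do
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
            --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done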
common/autotest_common.sh@948 -- # '[' -z 3383372 ']' 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 3383372 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3383372 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3383372' 00:09:14.649 killing process with pid 3383372 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 3383372 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 3383372 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:14.649 11:20:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.195 11:20:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:17.195 00:09:17.195 real 0m28.801s 00:09:17.195 user 1m18.734s 00:09:17.195 sys 0m6.541s 00:09:17.195 11:20:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:17.195 11:20:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:17.195 ************************************ 00:09:17.195 END TEST nvmf_connect_disconnect 00:09:17.195 ************************************ 00:09:17.195 11:20:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:17.195 11:20:45 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:17.195 11:20:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:17.195 11:20:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.195 11:20:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:17.195 ************************************ 00:09:17.195 START TEST nvmf_multitarget 00:09:17.195 ************************************ 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:17.195 * Looking for test storage... 
00:09:17.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:09:17.195 11:20:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:23.783 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:23.783 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:23.783 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.783 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:23.783 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:23.784 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.784 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:23.784 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:09:23.784 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:23.784 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:23.784 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:23.784 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.784 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:23.784 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:23.784 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:23.784 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:23.784 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:23.784 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:23.784 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:23.784 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:23.784 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:23.784 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:23.784 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:23.784 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:23.784 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:23.784 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:23.784 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:24.045 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.045 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:24.045 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.045 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:24.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:24.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:09:24.045 00:09:24.045 --- 10.0.0.2 ping statistics --- 00:09:24.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.045 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:09:24.045 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:24.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.379 ms 00:09:24.045 00:09:24.045 --- 10.0.0.1 ping statistics --- 00:09:24.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.045 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:09:24.045 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.045 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:09:24.045 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:24.045 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.045 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:24.045 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:24.045 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.045 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:24.045 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:24.045 11:20:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:24.045 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:24.045 11:20:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:24.045 11:20:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:24.045 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3391498 00:09:24.045 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3391498 00:09:24.045 11:20:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:24.045 11:20:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 3391498 ']' 00:09:24.045 11:20:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.045 11:20:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:24.045 11:20:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.045 11:20:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:24.045 11:20:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:24.045 [2024-07-15 11:20:52.733251] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
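The nvmf_tcp_init block above repeats the same physical-port split used by the earlier tests: one E810 port moves into a private namespace as the target side, the other stays on the host as the initiator side, both get addresses on the 10.0.0.0/24 test subnet, port 4420 is opened, and reachability is verified with ping. In plain commands, taken from the logged sequence:

    # Sketch of the logged nvmf_tcp_init steps.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator (host)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target (namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1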
00:09:24.045 [2024-07-15 11:20:52.733313] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.305 EAL: No free 2048 kB hugepages reported on node 1 00:09:24.306 [2024-07-15 11:20:52.803370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:24.306 [2024-07-15 11:20:52.878516] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.306 [2024-07-15 11:20:52.878554] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.306 [2024-07-15 11:20:52.878562] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.306 [2024-07-15 11:20:52.878568] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.306 [2024-07-15 11:20:52.878574] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:24.306 [2024-07-15 11:20:52.878715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.306 [2024-07-15 11:20:52.878829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:24.306 [2024-07-15 11:20:52.878985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.306 [2024-07-15 11:20:52.878987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.876 11:20:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:24.876 11:20:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:09:24.876 11:20:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:24.876 11:20:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:24.876 11:20:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:24.876 11:20:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.876 11:20:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:24.876 11:20:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:24.876 11:20:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:09:25.136 11:20:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:25.136 11:20:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:25.136 "nvmf_tgt_1" 00:09:25.136 11:20:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:25.398 "nvmf_tgt_2" 00:09:25.398 11:20:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:25.398 11:20:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:09:25.398 11:20:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:09:25.398 11:20:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:25.398 true 00:09:25.398 11:20:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:25.658 true 00:09:25.658 11:20:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:25.658 11:20:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:09:25.658 11:20:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:25.658 11:20:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:25.658 11:20:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:09:25.658 11:20:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:25.658 11:20:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:09:25.658 11:20:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:25.658 11:20:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:09:25.658 11:20:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:25.658 11:20:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:25.658 rmmod nvme_tcp 00:09:25.658 rmmod nvme_fabrics 00:09:25.658 rmmod nvme_keyring 00:09:25.658 11:20:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:25.658 11:20:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:09:25.658 11:20:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:09:25.658 11:20:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3391498 ']' 00:09:25.658 11:20:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3391498 00:09:25.658 11:20:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 3391498 ']' 00:09:25.658 11:20:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 3391498 00:09:25.658 11:20:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:09:25.658 11:20:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:25.658 11:20:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3391498 00:09:25.919 11:20:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:25.919 11:20:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:25.919 11:20:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3391498' 00:09:25.919 killing process with pid 3391498 00:09:25.919 11:20:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 3391498 00:09:25.919 11:20:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 3391498 00:09:25.919 11:20:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:25.919 11:20:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:25.919 11:20:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:25.919 11:20:54 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:25.919 11:20:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:25.919 11:20:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.919 11:20:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:25.919 11:20:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.466 11:20:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:28.466 00:09:28.466 real 0m11.106s 00:09:28.466 user 0m9.331s 00:09:28.466 sys 0m5.651s 00:09:28.466 11:20:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:28.466 11:20:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:28.466 ************************************ 00:09:28.466 END TEST nvmf_multitarget 00:09:28.466 ************************************ 00:09:28.466 11:20:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:28.466 11:20:56 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:28.466 11:20:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:28.466 11:20:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:28.466 11:20:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:28.466 ************************************ 00:09:28.466 START TEST nvmf_rpc 00:09:28.466 ************************************ 00:09:28.466 11:20:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:28.466 * Looking for test storage... 
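Before the rpc.sh output below, the nvmf_multitarget run that just finished is easy to summarize: it drives every step through test/nvmf/target/multitarget_rpc.py and only checks target counts with jq. Condensed from the trace above (paths shortened; counts as observed in this run):

  RPC=test/nvmf/target/multitarget_rpc.py      # relative to the spdk tree
  $RPC nvmf_get_targets | jq length            # 1 -> only the default target
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  $RPC nvmf_get_targets | jq length            # 3 -> default plus the two new targets
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  $RPC nvmf_get_targets | jq length            # back to 1 before nvmftestfini tears down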
00:09:28.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:28.466 11:20:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:28.466 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:28.466 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:28.467 11:20:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:35.097 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:35.097 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:35.097 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:35.097 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:35.097 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:35.357 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:35.357 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:35.357 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:35.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:35.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.536 ms 00:09:35.357 00:09:35.357 --- 10.0.0.2 ping statistics --- 00:09:35.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.357 rtt min/avg/max/mdev = 0.536/0.536/0.536/0.000 ms 00:09:35.357 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:35.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:35.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:09:35.357 00:09:35.357 --- 10.0.0.1 ping statistics --- 00:09:35.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.357 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:09:35.357 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:35.357 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:35.357 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:35.357 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:35.357 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:35.357 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:35.357 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:35.357 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:35.357 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:35.357 11:21:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:35.357 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:35.357 11:21:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:35.357 11:21:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.357 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3395981 00:09:35.357 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3395981 00:09:35.357 11:21:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:35.357 11:21:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 3395981 ']' 00:09:35.357 11:21:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.357 11:21:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:35.357 11:21:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.358 11:21:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:35.358 11:21:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.358 [2024-07-15 11:21:04.029249] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:09:35.358 [2024-07-15 11:21:04.029298] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.618 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.618 [2024-07-15 11:21:04.095128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:35.618 [2024-07-15 11:21:04.160115] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.618 [2024-07-15 11:21:04.160158] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:35.618 [2024-07-15 11:21:04.160166] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.618 [2024-07-15 11:21:04.160172] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:35.618 [2024-07-15 11:21:04.160178] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:35.618 [2024-07-15 11:21:04.160248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.618 [2024-07-15 11:21:04.160348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.618 [2024-07-15 11:21:04.160485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.618 [2024-07-15 11:21:04.160487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:36.189 11:21:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:36.189 11:21:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:36.189 11:21:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:36.189 11:21:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:36.189 11:21:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.189 11:21:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:36.189 11:21:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:36.189 11:21:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.189 11:21:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.189 11:21:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.189 11:21:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:36.189 "tick_rate": 2400000000, 00:09:36.189 "poll_groups": [ 00:09:36.189 { 00:09:36.189 "name": "nvmf_tgt_poll_group_000", 00:09:36.189 "admin_qpairs": 0, 00:09:36.189 "io_qpairs": 0, 00:09:36.189 "current_admin_qpairs": 0, 00:09:36.189 "current_io_qpairs": 0, 00:09:36.189 "pending_bdev_io": 0, 00:09:36.189 "completed_nvme_io": 0, 00:09:36.189 "transports": [] 00:09:36.189 }, 00:09:36.189 { 00:09:36.189 "name": "nvmf_tgt_poll_group_001", 00:09:36.189 "admin_qpairs": 0, 00:09:36.189 "io_qpairs": 0, 00:09:36.189 "current_admin_qpairs": 0, 00:09:36.189 "current_io_qpairs": 0, 00:09:36.189 "pending_bdev_io": 0, 00:09:36.189 "completed_nvme_io": 0, 00:09:36.189 "transports": [] 00:09:36.189 }, 00:09:36.189 { 00:09:36.189 "name": "nvmf_tgt_poll_group_002", 00:09:36.189 "admin_qpairs": 0, 00:09:36.189 "io_qpairs": 0, 00:09:36.189 "current_admin_qpairs": 0, 00:09:36.189 "current_io_qpairs": 0, 00:09:36.189 "pending_bdev_io": 0, 00:09:36.189 "completed_nvme_io": 0, 00:09:36.189 "transports": [] 00:09:36.189 }, 00:09:36.189 { 00:09:36.189 "name": "nvmf_tgt_poll_group_003", 00:09:36.189 "admin_qpairs": 0, 00:09:36.189 "io_qpairs": 0, 00:09:36.189 "current_admin_qpairs": 0, 00:09:36.189 "current_io_qpairs": 0, 00:09:36.189 "pending_bdev_io": 0, 00:09:36.189 "completed_nvme_io": 0, 00:09:36.189 "transports": [] 00:09:36.189 } 00:09:36.189 ] 00:09:36.189 }' 00:09:36.189 11:21:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:36.189 11:21:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:36.189 11:21:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:36.189 11:21:04 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:09:36.449 11:21:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:36.449 11:21:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:36.449 11:21:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:36.449 11:21:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:36.449 11:21:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.449 11:21:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.449 [2024-07-15 11:21:04.965177] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:36.450 11:21:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.450 11:21:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:36.450 11:21:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.450 11:21:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.450 11:21:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.450 11:21:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:36.450 "tick_rate": 2400000000, 00:09:36.450 "poll_groups": [ 00:09:36.450 { 00:09:36.450 "name": "nvmf_tgt_poll_group_000", 00:09:36.450 "admin_qpairs": 0, 00:09:36.450 "io_qpairs": 0, 00:09:36.450 "current_admin_qpairs": 0, 00:09:36.450 "current_io_qpairs": 0, 00:09:36.450 "pending_bdev_io": 0, 00:09:36.450 "completed_nvme_io": 0, 00:09:36.450 "transports": [ 00:09:36.450 { 00:09:36.450 "trtype": "TCP" 00:09:36.450 } 00:09:36.450 ] 00:09:36.450 }, 00:09:36.450 { 00:09:36.450 "name": "nvmf_tgt_poll_group_001", 00:09:36.450 "admin_qpairs": 0, 00:09:36.450 "io_qpairs": 0, 00:09:36.450 "current_admin_qpairs": 0, 00:09:36.450 "current_io_qpairs": 0, 00:09:36.450 "pending_bdev_io": 0, 00:09:36.450 "completed_nvme_io": 0, 00:09:36.450 "transports": [ 00:09:36.450 { 00:09:36.450 "trtype": "TCP" 00:09:36.450 } 00:09:36.450 ] 00:09:36.450 }, 00:09:36.450 { 00:09:36.450 "name": "nvmf_tgt_poll_group_002", 00:09:36.450 "admin_qpairs": 0, 00:09:36.450 "io_qpairs": 0, 00:09:36.450 "current_admin_qpairs": 0, 00:09:36.450 "current_io_qpairs": 0, 00:09:36.450 "pending_bdev_io": 0, 00:09:36.450 "completed_nvme_io": 0, 00:09:36.450 "transports": [ 00:09:36.450 { 00:09:36.450 "trtype": "TCP" 00:09:36.450 } 00:09:36.450 ] 00:09:36.450 }, 00:09:36.450 { 00:09:36.450 "name": "nvmf_tgt_poll_group_003", 00:09:36.450 "admin_qpairs": 0, 00:09:36.450 "io_qpairs": 0, 00:09:36.450 "current_admin_qpairs": 0, 00:09:36.450 "current_io_qpairs": 0, 00:09:36.450 "pending_bdev_io": 0, 00:09:36.450 "completed_nvme_io": 0, 00:09:36.450 "transports": [ 00:09:36.450 { 00:09:36.450 "trtype": "TCP" 00:09:36.450 } 00:09:36.450 ] 00:09:36.450 } 00:09:36.450 ] 00:09:36.450 }' 00:09:36.450 11:21:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:36.450 11:21:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:36.450 11:21:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:36.450 11:21:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:36.450 11:21:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:36.450 11:21:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:36.450 11:21:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
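At this point rpc.sh has created the TCP transport (nvmf_create_transport -t tcp -o -u 8192) and confirmed through nvmf_get_stats that each of the four poll groups now carries a TCP transport with zero queue pairs. The commands that follow exercise per-host access control on nqn.2016-06.io.spdk:cnode1; a condensed sketch of that sequence (rpc.py here stands for scripts/rpc.py against the running target, and $NVME_HOSTNQN / $NVME_HOSTID are the values produced by nvme gen-hostnqn earlier in this run):

  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1          # deny hosts that are not whitelisted
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420                  # fails: subsystem does not allow this host
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420                  # now succeeds; nvme disconnect cleans up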
00:09:36.450 11:21:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:36.450 11:21:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:36.450 11:21:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:36.450 11:21:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:36.450 11:21:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:36.450 11:21:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:36.450 11:21:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:36.450 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.450 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.450 Malloc1 00:09:36.450 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.450 11:21:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:36.450 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.450 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.450 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.450 11:21:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:36.450 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.450 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.450 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.450 11:21:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:36.450 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.450 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.450 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.450 11:21:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:36.450 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.450 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.710 [2024-07-15 11:21:05.153226] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:36.710 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.710 11:21:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:36.710 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:36.710 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:36.710 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:09:36.710 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:36.710 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:36.710 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:36.710 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:36.710 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:36.710 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:36.710 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:36.710 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:36.710 [2024-07-15 11:21:05.180068] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:09:36.710 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:36.710 could not add new controller: failed to write to nvme-fabrics device 00:09:36.710 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:36.710 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:36.710 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:36.710 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:36.710 11:21:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:36.710 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.710 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.710 11:21:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.710 11:21:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:38.092 11:21:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:38.092 11:21:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:38.092 11:21:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:38.092 11:21:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:38.092 11:21:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:40.637 11:21:08 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:40.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:40.637 [2024-07-15 11:21:08.938148] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:09:40.637 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:40.637 could not add new controller: failed to write to nvme-fabrics device 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.637 11:21:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:42.023 11:21:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:42.023 11:21:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:42.023 11:21:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:42.023 11:21:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:42.023 11:21:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:43.958 11:21:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:43.958 11:21:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:43.958 11:21:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:43.958 11:21:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:43.958 11:21:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:43.958 11:21:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:43.958 11:21:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:43.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.958 11:21:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:43.958 11:21:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:43.958 11:21:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:43.958 11:21:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:43.958 11:21:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:43.958 11:21:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:43.958 11:21:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:43.958 11:21:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:43.958 11:21:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.958 11:21:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.958 11:21:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.958 11:21:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:43.958 11:21:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:43.958 11:21:12 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:43.958 11:21:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.958 11:21:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.958 11:21:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.958 11:21:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:43.958 11:21:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.958 11:21:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.958 [2024-07-15 11:21:12.656885] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.219 11:21:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.219 11:21:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:44.219 11:21:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.219 11:21:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.219 11:21:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.219 11:21:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:44.219 11:21:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.219 11:21:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.219 11:21:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.219 11:21:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:45.603 11:21:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:45.603 11:21:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:45.603 11:21:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:45.603 11:21:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:45.603 11:21:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:48.149 11:21:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:48.149 11:21:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:48.149 11:21:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:48.149 11:21:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:48.149 11:21:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:48.149 11:21:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:48.149 11:21:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:48.149 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.149 11:21:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:48.149 11:21:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:48.149 11:21:16 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:48.149 11:21:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:48.149 11:21:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:48.149 11:21:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:48.150 11:21:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:48.150 11:21:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:48.150 11:21:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.150 11:21:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:48.150 11:21:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.150 11:21:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:48.150 11:21:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.150 11:21:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:48.150 11:21:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.150 11:21:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:48.150 11:21:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:48.150 11:21:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.150 11:21:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:48.150 11:21:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.150 11:21:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:48.150 11:21:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.150 11:21:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:48.150 [2024-07-15 11:21:16.405003] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:48.150 11:21:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.150 11:21:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:48.150 11:21:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.150 11:21:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:48.150 11:21:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.150 11:21:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:48.150 11:21:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.150 11:21:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:48.150 11:21:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.150 11:21:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:49.535 11:21:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:49.535 11:21:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:09:49.535 11:21:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:49.535 11:21:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:49.535 11:21:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:51.447 11:21:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:51.447 11:21:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:51.447 11:21:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:51.447 11:21:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:51.447 11:21:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:51.447 11:21:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:51.447 11:21:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:51.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.447 [2024-07-15 11:21:20.116235] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.447 11:21:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:53.399 11:21:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:53.399 11:21:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:53.399 11:21:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:53.399 11:21:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:53.399 11:21:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:55.314 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:55.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:55.315 [2024-07-15 11:21:23.836576] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.315 11:21:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:56.700 11:21:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:56.700 11:21:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:56.701 11:21:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:56.701 11:21:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:56.701 11:21:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:59.245 
11:21:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:59.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.245 [2024-07-15 11:21:27.553845] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.245 11:21:27 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.245 11:21:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:00.628 11:21:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:00.628 11:21:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:00.628 11:21:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:00.628 11:21:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:00.628 11:21:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:02.537 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:02.537 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:02.537 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:02.537 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:02.537 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:02.537 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:02.537 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:02.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.798 [2024-07-15 11:21:31.317570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.798 [2024-07-15 11:21:31.377697] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.798 [2024-07-15 11:21:31.437883] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
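The connect/disconnect phases earlier in this trace lean on a pair of polling helpers from common/autotest_common.sh (waitforserial and waitforserial_disconnect). The following is a paraphrased sketch of the polling pattern visible in the xtrace output, not the verbatim SPDK helpers; the lsblk/grep probe, the 2-second sleep and the 15-iteration bound are taken from the log, while the exact control flow and the default-count handling are assumptions.

    # Poll lsblk until a block device whose SERIAL column matches shows up,
    # giving up after ~15 attempts (constants as seen in the trace above).
    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=${2:-1} nvme_devices=0
        while (( i++ <= 15 )); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }

    # Companion check used after nvme disconnect: wait until no lsblk row
    # carries the serial any more.
    waitforserial_disconnect() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
            sleep 2
        done
        return 1
    }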
00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.798 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.798 [2024-07-15 11:21:31.498084] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
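For reference, the create/tear-down cycle that the loop at target/rpc.sh@99-107 repeats in the trace above reduces to the rpc.py calls below. The NQN, serial number, listen address and namespace name are copied verbatim from the log; the test itself issues these through the rpc_cmd wrapper rather than calling rpc.py directly, so treat this as a condensed sketch of one iteration.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # One iteration of the target/rpc.sh@99-107 loop as traced above
    $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc1
    $rpc nvmf_subsystem_allow_any_host "$nqn"
    $rpc nvmf_subsystem_remove_ns "$nqn" 1
    $rpc nvmf_delete_subsystem "$nqn"

The earlier loop at target/rpc.sh@81-94 is the same cycle with an nvme connect / nvme disconnect pair against 10.0.0.2:4420 (plus the waitforserial checks sketched above) inserted between allow_any_host and remove_ns, and with the namespace added as Malloc1 -n 5 and removed by nsid 5.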
00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.059 [2024-07-15 11:21:31.558282] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.059 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:10:03.059 "tick_rate": 2400000000, 00:10:03.059 "poll_groups": [ 00:10:03.059 { 00:10:03.059 "name": "nvmf_tgt_poll_group_000", 00:10:03.059 "admin_qpairs": 0, 00:10:03.059 "io_qpairs": 224, 00:10:03.059 "current_admin_qpairs": 0, 00:10:03.059 "current_io_qpairs": 0, 00:10:03.059 "pending_bdev_io": 0, 00:10:03.059 "completed_nvme_io": 274, 00:10:03.059 "transports": [ 00:10:03.059 { 00:10:03.059 "trtype": "TCP" 00:10:03.059 } 00:10:03.059 ] 00:10:03.059 }, 00:10:03.059 { 00:10:03.059 "name": "nvmf_tgt_poll_group_001", 00:10:03.059 "admin_qpairs": 1, 00:10:03.059 "io_qpairs": 223, 00:10:03.059 "current_admin_qpairs": 0, 00:10:03.059 "current_io_qpairs": 0, 00:10:03.059 "pending_bdev_io": 0, 00:10:03.059 "completed_nvme_io": 463, 00:10:03.059 "transports": [ 00:10:03.059 { 00:10:03.059 "trtype": "TCP" 00:10:03.059 } 00:10:03.059 ] 00:10:03.060 }, 00:10:03.060 { 
00:10:03.060 "name": "nvmf_tgt_poll_group_002", 00:10:03.060 "admin_qpairs": 6, 00:10:03.060 "io_qpairs": 218, 00:10:03.060 "current_admin_qpairs": 0, 00:10:03.060 "current_io_qpairs": 0, 00:10:03.060 "pending_bdev_io": 0, 00:10:03.060 "completed_nvme_io": 277, 00:10:03.060 "transports": [ 00:10:03.060 { 00:10:03.060 "trtype": "TCP" 00:10:03.060 } 00:10:03.060 ] 00:10:03.060 }, 00:10:03.060 { 00:10:03.060 "name": "nvmf_tgt_poll_group_003", 00:10:03.060 "admin_qpairs": 0, 00:10:03.060 "io_qpairs": 224, 00:10:03.060 "current_admin_qpairs": 0, 00:10:03.060 "current_io_qpairs": 0, 00:10:03.060 "pending_bdev_io": 0, 00:10:03.060 "completed_nvme_io": 225, 00:10:03.060 "transports": [ 00:10:03.060 { 00:10:03.060 "trtype": "TCP" 00:10:03.060 } 00:10:03.060 ] 00:10:03.060 } 00:10:03.060 ] 00:10:03.060 }' 00:10:03.060 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:10:03.060 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:03.060 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:03.060 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:03.060 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:10:03.060 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:10:03.060 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:03.060 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:03.060 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:03.060 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:10:03.060 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:10:03.060 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:03.060 11:21:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:03.060 11:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:03.060 11:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:10:03.060 11:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:03.060 11:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:10:03.060 11:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:03.060 11:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:03.060 rmmod nvme_tcp 00:10:03.060 rmmod nvme_fabrics 00:10:03.320 rmmod nvme_keyring 00:10:03.320 11:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:03.320 11:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:10:03.320 11:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:10:03.320 11:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3395981 ']' 00:10:03.320 11:21:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3395981 00:10:03.320 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 3395981 ']' 00:10:03.320 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 3395981 00:10:03.320 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:10:03.320 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:03.320 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3395981 00:10:03.320 11:21:31 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:03.320 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:03.320 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3395981' 00:10:03.320 killing process with pid 3395981 00:10:03.320 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 3395981 00:10:03.320 11:21:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 3395981 00:10:03.320 11:21:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:03.320 11:21:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:03.320 11:21:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:03.320 11:21:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:03.320 11:21:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:03.320 11:21:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.320 11:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:03.320 11:21:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.864 11:21:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:05.864 00:10:05.864 real 0m37.389s 00:10:05.864 user 1m53.220s 00:10:05.864 sys 0m7.205s 00:10:05.864 11:21:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:05.864 11:21:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.864 ************************************ 00:10:05.864 END TEST nvmf_rpc 00:10:05.864 ************************************ 00:10:05.864 11:21:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:05.864 11:21:34 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:05.864 11:21:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:05.864 11:21:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:05.864 11:21:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:05.864 ************************************ 00:10:05.864 START TEST nvmf_invalid 00:10:05.864 ************************************ 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:05.864 * Looking for test storage... 
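The jsum checks near the end of the nvmf_rpc trace above sum a single field across all poll groups in the nvmf_get_stats output and require a non-zero total. A condensed equivalent of that aggregation, using the same jq filter and awk reducer that appear in the trace (variable names here are illustrative, not the script's own):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Mirror of the jsum '.poll_groups[].io_qpairs' check traced above.
    stats=$($rpc nvmf_get_stats)
    io_qpairs_total=$(jq '.poll_groups[].io_qpairs' <<< "$stats" | awk '{s+=$1} END {print s}')
    (( io_qpairs_total > 0 ))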
00:10:05.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:05.864 11:21:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:05.865 11:21:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:05.865 11:21:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:05.865 11:21:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:05.865 11:21:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:05.865 11:21:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:05.865 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:05.865 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.865 11:21:34 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:10:05.865 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:05.865 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:05.865 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.865 11:21:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:05.865 11:21:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.865 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:05.865 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:05.865 11:21:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:10:05.865 11:21:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:12.454 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:12.454 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:12.454 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:12.455 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:12.455 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:12.455 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:12.714 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:12.714 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:12.714 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:12.714 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:12.714 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:12.714 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:12.714 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:12.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:12.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.499 ms 00:10:12.974 00:10:12.974 --- 10.0.0.2 ping statistics --- 00:10:12.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.974 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms 00:10:12.974 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:12.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:12.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:10:12.974 00:10:12.974 --- 10.0.0.1 ping statistics --- 00:10:12.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.974 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:10:12.974 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.974 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:10:12.974 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:12.974 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.974 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:12.974 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:12.974 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.974 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:12.974 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:12.974 11:21:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:12.974 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:12.974 11:21:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:12.974 11:21:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:12.974 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3406292 00:10:12.974 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3406292 00:10:12.975 11:21:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:12.975 11:21:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 3406292 ']' 00:10:12.975 11:21:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.975 11:21:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:12.975 11:21:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.975 11:21:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:12.975 11:21:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:12.975 [2024-07-15 11:21:41.538604] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
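The nvmf_tcp_init sequence traced above moves one port of the E810 pair into a private network namespace so the target and initiator sides of the test can share a single host. Condensed from the commands in the trace; the interface names cvl_0_0/cvl_0_1, the namespace name and the 10.0.0.x addresses are exactly as logged, while the backgrounding of nvmf_tgt stands in for the nvmfappstart/waitforlisten wrapper the test actually uses.

    # Target side lives in the namespace, initiator side stays in the root netns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                   # target reachable
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # initiator reachable

    # The target application is then started inside the namespace, as in the trace:
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &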
00:10:12.975 [2024-07-15 11:21:41.538657] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.975 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.975 [2024-07-15 11:21:41.606386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:12.975 [2024-07-15 11:21:41.674645] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.975 [2024-07-15 11:21:41.674683] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.975 [2024-07-15 11:21:41.674691] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.975 [2024-07-15 11:21:41.674697] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.975 [2024-07-15 11:21:41.674703] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:12.975 [2024-07-15 11:21:41.674849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.975 [2024-07-15 11:21:41.674962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.975 [2024-07-15 11:21:41.675116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.975 [2024-07-15 11:21:41.675118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:13.917 11:21:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:13.917 11:21:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:10:13.917 11:21:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:13.917 11:21:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:13.917 11:21:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:13.917 11:21:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.917 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:13.917 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode23373 00:10:13.917 [2024-07-15 11:21:42.493116] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:13.917 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:10:13.917 { 00:10:13.917 "nqn": "nqn.2016-06.io.spdk:cnode23373", 00:10:13.917 "tgt_name": "foobar", 00:10:13.917 "method": "nvmf_create_subsystem", 00:10:13.917 "req_id": 1 00:10:13.917 } 00:10:13.917 Got JSON-RPC error response 00:10:13.917 response: 00:10:13.917 { 00:10:13.917 "code": -32603, 00:10:13.917 "message": "Unable to find target foobar" 00:10:13.917 }' 00:10:13.917 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:10:13.917 { 00:10:13.917 "nqn": "nqn.2016-06.io.spdk:cnode23373", 00:10:13.917 "tgt_name": "foobar", 00:10:13.917 "method": "nvmf_create_subsystem", 00:10:13.917 "req_id": 1 00:10:13.917 } 00:10:13.917 Got JSON-RPC error response 00:10:13.917 response: 00:10:13.917 { 00:10:13.917 "code": -32603, 00:10:13.917 "message": "Unable to find target foobar" 
00:10:13.917 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:13.917 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:13.917 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2324 00:10:14.178 [2024-07-15 11:21:42.669681] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2324: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:14.178 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:10:14.178 { 00:10:14.178 "nqn": "nqn.2016-06.io.spdk:cnode2324", 00:10:14.178 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:14.178 "method": "nvmf_create_subsystem", 00:10:14.178 "req_id": 1 00:10:14.178 } 00:10:14.178 Got JSON-RPC error response 00:10:14.178 response: 00:10:14.178 { 00:10:14.178 "code": -32602, 00:10:14.178 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:14.178 }' 00:10:14.178 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:10:14.178 { 00:10:14.178 "nqn": "nqn.2016-06.io.spdk:cnode2324", 00:10:14.178 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:14.178 "method": "nvmf_create_subsystem", 00:10:14.178 "req_id": 1 00:10:14.178 } 00:10:14.178 Got JSON-RPC error response 00:10:14.178 response: 00:10:14.178 { 00:10:14.178 "code": -32602, 00:10:14.178 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:14.178 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:14.178 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:14.178 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode13366 00:10:14.178 [2024-07-15 11:21:42.850275] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13366: invalid model number 'SPDK_Controller' 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:10:14.439 { 00:10:14.439 "nqn": "nqn.2016-06.io.spdk:cnode13366", 00:10:14.439 "model_number": "SPDK_Controller\u001f", 00:10:14.439 "method": "nvmf_create_subsystem", 00:10:14.439 "req_id": 1 00:10:14.439 } 00:10:14.439 Got JSON-RPC error response 00:10:14.439 response: 00:10:14.439 { 00:10:14.439 "code": -32602, 00:10:14.439 "message": "Invalid MN SPDK_Controller\u001f" 00:10:14.439 }' 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:10:14.439 { 00:10:14.439 "nqn": "nqn.2016-06.io.spdk:cnode13366", 00:10:14.439 "model_number": "SPDK_Controller\u001f", 00:10:14.439 "method": "nvmf_create_subsystem", 00:10:14.439 "req_id": 1 00:10:14.439 } 00:10:14.439 Got JSON-RPC error response 00:10:14.439 response: 00:10:14.439 { 00:10:14.439 "code": -32602, 00:10:14.439 "message": "Invalid MN SPDK_Controller\u001f" 00:10:14.439 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' 
'84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.439 
11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.439 11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.439 
11:21:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:10:14.439 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:10:14.439 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:10:14.439 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.439 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.439 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:10:14.439 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:10:14.439 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:10:14.439 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.439 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.439 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:10:14.439 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:10:14.439 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:10:14.439 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.439 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.439 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:10:14.439 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:10:14.439 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:10:14.439 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.440 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.440 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:10:14.440 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:10:14.440 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:10:14.440 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.440 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.440 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:10:14.440 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:10:14.440 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:10:14.440 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.440 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.440 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ w == \- ]] 00:10:14.440 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'w)OiLdhojV3z0RqsC&}=~' 00:10:14.440 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'w)OiLdhojV3z0RqsC&}=~' nqn.2016-06.io.spdk:cnode10226 00:10:14.700 [2024-07-15 11:21:43.183285] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10226: invalid serial number 'w)OiLdhojV3z0RqsC&}=~' 00:10:14.700 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:10:14.700 { 00:10:14.700 "nqn": "nqn.2016-06.io.spdk:cnode10226", 00:10:14.700 "serial_number": "w)OiLdhojV3z0RqsC&}=~", 00:10:14.700 "method": "nvmf_create_subsystem", 00:10:14.700 "req_id": 1 00:10:14.700 } 00:10:14.700 Got JSON-RPC error response 00:10:14.700 response: 00:10:14.700 { 
00:10:14.700 "code": -32602, 00:10:14.700 "message": "Invalid SN w)OiLdhojV3z0RqsC&}=~" 00:10:14.700 }' 00:10:14.700 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:10:14.700 { 00:10:14.700 "nqn": "nqn.2016-06.io.spdk:cnode10226", 00:10:14.700 "serial_number": "w)OiLdhojV3z0RqsC&}=~", 00:10:14.700 "method": "nvmf_create_subsystem", 00:10:14.700 "req_id": 1 00:10:14.700 } 00:10:14.700 Got JSON-RPC error response 00:10:14.700 response: 00:10:14.700 { 00:10:14.700 "code": -32602, 00:10:14.700 "message": "Invalid SN w)OiLdhojV3z0RqsC&}=~" 00:10:14.700 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:14.700 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:10:14.700 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:10:14.700 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:14.700 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:14.700 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:14.700 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:14.700 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.700 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:10:14.700 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:10:14.700 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:10:14.700 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.700 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.700 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 
00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 
00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 
00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.701 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.702 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:10:14.702 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:10:14.702 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:10:14.702 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.702 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.702 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:10:14.702 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:10:14.702 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:10:14.702 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.702 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.702 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:10:14.702 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:10:14.702 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:10:14.702 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.702 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:10:14.962 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:10:14.963 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:10:14.963 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.963 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.963 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:10:14.963 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:10:14.963 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:10:14.963 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.963 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.963 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:10:14.963 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:10:14.963 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:10:14.963 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.963 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.963 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ s == \- ]] 00:10:14.963 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'sDM&eIWt0#F2xzFTR{#/,D^eDr65CgNMQ9k32[@cV' 00:10:14.963 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'sDM&eIWt0#F2xzFTR{#/,D^eDr65CgNMQ9k32[@cV' nqn.2016-06.io.spdk:cnode13516 00:10:15.249 [2024-07-15 11:21:43.664835] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13516: invalid model number 'sDM&eIWt0#F2xzFTR{#/,D^eDr65CgNMQ9k32[@cV' 00:10:15.249 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:10:15.249 { 00:10:15.249 "nqn": "nqn.2016-06.io.spdk:cnode13516", 00:10:15.249 "model_number": "sDM&eIWt0#F2xzFTR{#/,D^eDr65CgNMQ9k32[@cV", 00:10:15.249 "method": "nvmf_create_subsystem", 00:10:15.249 "req_id": 1 00:10:15.249 } 00:10:15.249 Got JSON-RPC error response 00:10:15.249 response: 00:10:15.249 { 00:10:15.249 "code": -32602, 00:10:15.249 "message": "Invalid MN sDM&eIWt0#F2xzFTR{#/,D^eDr65CgNMQ9k32[@cV" 00:10:15.249 }' 00:10:15.249 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:10:15.249 { 00:10:15.249 "nqn": "nqn.2016-06.io.spdk:cnode13516", 00:10:15.249 "model_number": "sDM&eIWt0#F2xzFTR{#/,D^eDr65CgNMQ9k32[@cV", 00:10:15.249 "method": "nvmf_create_subsystem", 00:10:15.249 "req_id": 1 00:10:15.249 } 00:10:15.249 Got JSON-RPC error response 00:10:15.249 response: 00:10:15.249 { 00:10:15.249 "code": -32602, 00:10:15.249 "message": "Invalid MN sDM&eIWt0#F2xzFTR{#/,D^eDr65CgNMQ9k32[@cV" 00:10:15.249 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:15.249 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:10:15.249 [2024-07-15 11:21:43.837465] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:15.249 11:21:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:10:15.512 11:21:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:10:15.512 11:21:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:10:15.512 11:21:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:10:15.512 11:21:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:10:15.512 11:21:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:10:15.512 [2024-07-15 11:21:44.188037] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:10:15.772 11:21:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:10:15.772 { 00:10:15.772 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:15.772 "listen_address": { 00:10:15.772 "trtype": "tcp", 00:10:15.772 "traddr": "", 00:10:15.772 "trsvcid": "4421" 00:10:15.772 }, 00:10:15.772 "method": "nvmf_subsystem_remove_listener", 00:10:15.772 "req_id": 1 00:10:15.772 } 00:10:15.772 Got JSON-RPC error response 00:10:15.772 response: 00:10:15.772 { 00:10:15.772 "code": -32602, 00:10:15.772 "message": "Invalid parameters" 00:10:15.772 }' 00:10:15.772 11:21:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:10:15.772 { 00:10:15.772 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:15.772 "listen_address": { 00:10:15.772 "trtype": "tcp", 00:10:15.772 "traddr": "", 00:10:15.772 "trsvcid": "4421" 00:10:15.772 }, 00:10:15.772 "method": "nvmf_subsystem_remove_listener", 00:10:15.772 "req_id": 1 00:10:15.772 } 00:10:15.772 Got JSON-RPC error response 00:10:15.772 response: 00:10:15.772 { 00:10:15.772 "code": -32602, 00:10:15.772 "message": "Invalid parameters" 00:10:15.772 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:10:15.772 11:21:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13322 -i 0 00:10:15.772 [2024-07-15 11:21:44.364566] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13322: invalid cntlid range [0-65519] 00:10:15.772 11:21:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:10:15.772 { 00:10:15.772 "nqn": "nqn.2016-06.io.spdk:cnode13322", 00:10:15.772 "min_cntlid": 0, 00:10:15.772 "method": "nvmf_create_subsystem", 00:10:15.772 "req_id": 1 00:10:15.772 } 00:10:15.772 Got JSON-RPC error response 00:10:15.772 response: 00:10:15.772 { 00:10:15.772 "code": -32602, 00:10:15.772 "message": "Invalid cntlid range [0-65519]" 00:10:15.772 }' 00:10:15.772 11:21:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:10:15.772 { 00:10:15.772 "nqn": "nqn.2016-06.io.spdk:cnode13322", 00:10:15.772 "min_cntlid": 0, 00:10:15.772 "method": "nvmf_create_subsystem", 00:10:15.772 "req_id": 1 00:10:15.772 } 00:10:15.772 Got JSON-RPC error response 00:10:15.772 response: 00:10:15.772 { 00:10:15.772 "code": -32602, 00:10:15.772 "message": "Invalid cntlid range [0-65519]" 00:10:15.772 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ 
\r\a\n\g\e* ]] 00:10:15.772 11:21:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13574 -i 65520 00:10:16.032 [2024-07-15 11:21:44.537106] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13574: invalid cntlid range [65520-65519] 00:10:16.032 11:21:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:10:16.032 { 00:10:16.032 "nqn": "nqn.2016-06.io.spdk:cnode13574", 00:10:16.032 "min_cntlid": 65520, 00:10:16.032 "method": "nvmf_create_subsystem", 00:10:16.032 "req_id": 1 00:10:16.032 } 00:10:16.032 Got JSON-RPC error response 00:10:16.032 response: 00:10:16.032 { 00:10:16.032 "code": -32602, 00:10:16.032 "message": "Invalid cntlid range [65520-65519]" 00:10:16.032 }' 00:10:16.032 11:21:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:10:16.032 { 00:10:16.032 "nqn": "nqn.2016-06.io.spdk:cnode13574", 00:10:16.032 "min_cntlid": 65520, 00:10:16.032 "method": "nvmf_create_subsystem", 00:10:16.032 "req_id": 1 00:10:16.032 } 00:10:16.032 Got JSON-RPC error response 00:10:16.032 response: 00:10:16.032 { 00:10:16.032 "code": -32602, 00:10:16.032 "message": "Invalid cntlid range [65520-65519]" 00:10:16.032 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:16.032 11:21:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17689 -I 0 00:10:16.032 [2024-07-15 11:21:44.697658] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17689: invalid cntlid range [1-0] 00:10:16.032 11:21:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:10:16.032 { 00:10:16.032 "nqn": "nqn.2016-06.io.spdk:cnode17689", 00:10:16.032 "max_cntlid": 0, 00:10:16.032 "method": "nvmf_create_subsystem", 00:10:16.032 "req_id": 1 00:10:16.032 } 00:10:16.032 Got JSON-RPC error response 00:10:16.032 response: 00:10:16.032 { 00:10:16.032 "code": -32602, 00:10:16.032 "message": "Invalid cntlid range [1-0]" 00:10:16.032 }' 00:10:16.032 11:21:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:10:16.032 { 00:10:16.032 "nqn": "nqn.2016-06.io.spdk:cnode17689", 00:10:16.032 "max_cntlid": 0, 00:10:16.032 "method": "nvmf_create_subsystem", 00:10:16.032 "req_id": 1 00:10:16.032 } 00:10:16.032 Got JSON-RPC error response 00:10:16.032 response: 00:10:16.032 { 00:10:16.032 "code": -32602, 00:10:16.032 "message": "Invalid cntlid range [1-0]" 00:10:16.032 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:16.032 11:21:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7341 -I 65520 00:10:16.292 [2024-07-15 11:21:44.870174] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7341: invalid cntlid range [1-65520] 00:10:16.292 11:21:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:10:16.292 { 00:10:16.292 "nqn": "nqn.2016-06.io.spdk:cnode7341", 00:10:16.292 "max_cntlid": 65520, 00:10:16.292 "method": "nvmf_create_subsystem", 00:10:16.292 "req_id": 1 00:10:16.292 } 00:10:16.292 Got JSON-RPC error response 00:10:16.292 response: 00:10:16.292 { 00:10:16.292 "code": -32602, 00:10:16.292 "message": "Invalid cntlid range [1-65520]" 00:10:16.292 }' 00:10:16.292 11:21:44 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@80 -- # [[ request: 00:10:16.292 { 00:10:16.292 "nqn": "nqn.2016-06.io.spdk:cnode7341", 00:10:16.292 "max_cntlid": 65520, 00:10:16.292 "method": "nvmf_create_subsystem", 00:10:16.292 "req_id": 1 00:10:16.292 } 00:10:16.292 Got JSON-RPC error response 00:10:16.292 response: 00:10:16.292 { 00:10:16.292 "code": -32602, 00:10:16.292 "message": "Invalid cntlid range [1-65520]" 00:10:16.292 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:16.292 11:21:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22214 -i 6 -I 5 00:10:16.551 [2024-07-15 11:21:45.046733] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22214: invalid cntlid range [6-5] 00:10:16.551 11:21:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:10:16.551 { 00:10:16.551 "nqn": "nqn.2016-06.io.spdk:cnode22214", 00:10:16.551 "min_cntlid": 6, 00:10:16.551 "max_cntlid": 5, 00:10:16.551 "method": "nvmf_create_subsystem", 00:10:16.551 "req_id": 1 00:10:16.551 } 00:10:16.551 Got JSON-RPC error response 00:10:16.551 response: 00:10:16.551 { 00:10:16.551 "code": -32602, 00:10:16.551 "message": "Invalid cntlid range [6-5]" 00:10:16.551 }' 00:10:16.551 11:21:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:10:16.551 { 00:10:16.551 "nqn": "nqn.2016-06.io.spdk:cnode22214", 00:10:16.551 "min_cntlid": 6, 00:10:16.551 "max_cntlid": 5, 00:10:16.551 "method": "nvmf_create_subsystem", 00:10:16.551 "req_id": 1 00:10:16.551 } 00:10:16.551 Got JSON-RPC error response 00:10:16.551 response: 00:10:16.551 { 00:10:16.551 "code": -32602, 00:10:16.551 "message": "Invalid cntlid range [6-5]" 00:10:16.551 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:16.551 11:21:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:10:16.551 11:21:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:10:16.551 { 00:10:16.551 "name": "foobar", 00:10:16.551 "method": "nvmf_delete_target", 00:10:16.551 "req_id": 1 00:10:16.551 } 00:10:16.551 Got JSON-RPC error response 00:10:16.551 response: 00:10:16.551 { 00:10:16.551 "code": -32602, 00:10:16.551 "message": "The specified target doesn'\''t exist, cannot delete it." 00:10:16.551 }' 00:10:16.551 11:21:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:10:16.551 { 00:10:16.551 "name": "foobar", 00:10:16.551 "method": "nvmf_delete_target", 00:10:16.551 "req_id": 1 00:10:16.551 } 00:10:16.551 Got JSON-RPC error response 00:10:16.551 response: 00:10:16.551 { 00:10:16.551 "code": -32602, 00:10:16.551 "message": "The specified target doesn't exist, cannot delete it." 
00:10:16.551 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:10:16.552 11:21:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:10:16.552 11:21:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:10:16.552 11:21:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:16.552 11:21:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:10:16.552 11:21:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:16.552 11:21:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:10:16.552 11:21:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:16.552 11:21:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:16.552 rmmod nvme_tcp 00:10:16.552 rmmod nvme_fabrics 00:10:16.552 rmmod nvme_keyring 00:10:16.552 11:21:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:16.552 11:21:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:10:16.552 11:21:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:10:16.552 11:21:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3406292 ']' 00:10:16.552 11:21:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3406292 00:10:16.552 11:21:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 3406292 ']' 00:10:16.552 11:21:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 3406292 00:10:16.552 11:21:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:10:16.552 11:21:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:16.552 11:21:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3406292 00:10:16.811 11:21:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:16.812 11:21:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:16.812 11:21:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3406292' 00:10:16.812 killing process with pid 3406292 00:10:16.812 11:21:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 3406292 00:10:16.812 11:21:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 3406292 00:10:16.812 11:21:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:16.812 11:21:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:16.812 11:21:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:16.812 11:21:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:16.812 11:21:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:16.812 11:21:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.812 11:21:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:16.812 11:21:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.356 11:21:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:19.356 00:10:19.356 real 0m13.344s 00:10:19.356 user 0m19.192s 00:10:19.356 sys 0m6.245s 00:10:19.356 11:21:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:19.356 11:21:47 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:19.356 ************************************ 00:10:19.356 END TEST nvmf_invalid 00:10:19.356 ************************************ 00:10:19.356 11:21:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:19.356 11:21:47 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:19.356 11:21:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:19.356 11:21:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:19.356 11:21:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:19.356 ************************************ 00:10:19.356 START TEST nvmf_abort 00:10:19.356 ************************************ 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:19.356 * Looking for test storage... 00:10:19.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.356 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.357 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.357 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:19.357 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:19.357 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:19.357 11:21:47 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:19.357 11:21:47 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:19.357 11:21:47 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:10:19.357 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:19.357 11:21:47 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:19.357 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:19.357 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:19.357 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:19.357 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.357 11:21:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:19.357 11:21:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.357 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:19.357 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:19.357 11:21:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:10:19.357 11:21:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:25.947 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:25.947 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:10:25.947 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:25.947 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:25.947 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:25.947 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:25.947 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:25.947 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:10:25.947 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:25.947 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:10:25.947 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:10:25.947 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:10:25.947 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:10:25.947 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:10:25.947 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:25.948 
11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:25.948 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:25.948 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:25.948 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:25.948 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:25.948 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:26.209 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:26.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:26.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:10:26.209 00:10:26.209 --- 10.0.0.2 ping statistics --- 00:10:26.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.209 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:10:26.209 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:26.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:26.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.371 ms 00:10:26.209 00:10:26.209 --- 10.0.0.1 ping statistics --- 00:10:26.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.209 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:10:26.209 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:26.209 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:10:26.209 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:26.209 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:26.209 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:26.209 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:26.209 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:26.209 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:26.209 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:26.209 11:21:54 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:26.209 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:26.209 11:21:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:26.209 11:21:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:26.209 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3411313 00:10:26.209 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3411313 00:10:26.209 11:21:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:26.209 11:21:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 3411313 ']' 00:10:26.209 11:21:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.209 11:21:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:26.209 11:21:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.209 11:21:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:26.210 11:21:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:26.210 [2024-07-15 11:21:54.784093] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
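The trace above is nvmf/common.sh bringing up the per-test TCP topology for nvmf_abort: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and addressed as the target (10.0.0.2), its sibling port (cvl_0_1) stays in the host as the initiator (10.0.0.1), TCP port 4420 is opened in iptables, both directions are pinged, and nvmf_tgt is then launched inside the namespace. A minimal standalone sketch of that wiring, using the interface and namespace names from this run (they differ per host, and the flush/cleanup steps are omitted):

  # sketch only: reproduce the namespace split traced above
  NS=cvl_0_0_ns_spdk; TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"                      # target port lives in the netns
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"               # initiator side stays in the host
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1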
00:10:26.210 [2024-07-15 11:21:54.784147] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.210 EAL: No free 2048 kB hugepages reported on node 1 00:10:26.210 [2024-07-15 11:21:54.866301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:26.469 [2024-07-15 11:21:54.930518] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:26.469 [2024-07-15 11:21:54.930556] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:26.469 [2024-07-15 11:21:54.930563] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:26.469 [2024-07-15 11:21:54.930570] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:26.469 [2024-07-15 11:21:54.930575] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:26.469 [2024-07-15 11:21:54.930683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:26.469 [2024-07-15 11:21:54.930839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.469 [2024-07-15 11:21:54.930839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:27.038 11:21:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:27.038 11:21:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:10:27.038 11:21:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:27.038 11:21:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:27.038 11:21:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:27.038 11:21:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:27.038 11:21:55 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:27.038 11:21:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.038 11:21:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:27.038 [2024-07-15 11:21:55.638138] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:27.038 11:21:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.038 11:21:55 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:27.038 11:21:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.038 11:21:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:27.038 Malloc0 00:10:27.038 11:21:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.038 11:21:55 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:27.038 11:21:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.038 11:21:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:27.038 Delay0 00:10:27.038 11:21:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.038 11:21:55 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
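With the target up, target/abort.sh configures it through rpc_cmd: a TCP transport, a 64 MiB Malloc0 bdev with a 4096-byte block size, a Delay0 delay bdev layered on top of it to inject latency, and subsystem nqn.2016-06.io.spdk:cnode0; the namespace and listeners are added in the lines just below. The same sequence expressed as plain scripts/rpc.py calls (a sketch, assuming the default RPC socket):

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  rpc.py bdev_malloc_create 64 4096 -b Malloc0
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420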
00:10:27.038 11:21:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.038 11:21:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:27.038 11:21:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.038 11:21:55 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:27.038 11:21:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.038 11:21:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:27.038 11:21:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.038 11:21:55 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:27.039 11:21:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.039 11:21:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:27.039 [2024-07-15 11:21:55.723554] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:27.039 11:21:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.039 11:21:55 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:27.039 11:21:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.039 11:21:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:27.298 11:21:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.298 11:21:55 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:27.298 EAL: No free 2048 kB hugepages reported on node 1 00:10:27.298 [2024-07-15 11:21:55.885341] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:29.835 Initializing NVMe Controllers 00:10:29.835 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:29.835 controller IO queue size 128 less than required 00:10:29.835 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:29.835 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:29.835 Initialization complete. Launching workers. 
00:10:29.835 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 31763 00:10:29.835 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31828, failed to submit 62 00:10:29.835 success 31767, unsuccess 61, failed 0 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:29.835 rmmod nvme_tcp 00:10:29.835 rmmod nvme_fabrics 00:10:29.835 rmmod nvme_keyring 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3411313 ']' 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3411313 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 3411313 ']' 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 3411313 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3411313 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3411313' 00:10:29.835 killing process with pid 3411313 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 3411313 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 3411313 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:29.835 11:21:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.746 11:22:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:31.746 00:10:31.746 real 0m12.821s 00:10:31.746 user 0m14.154s 00:10:31.746 sys 0m6.060s 00:10:31.746 11:22:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:31.746 11:22:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:31.746 ************************************ 00:10:31.746 END TEST nvmf_abort 00:10:31.746 ************************************ 00:10:31.746 11:22:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:31.746 11:22:00 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:31.746 11:22:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:31.746 11:22:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.746 11:22:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:32.007 ************************************ 00:10:32.007 START TEST nvmf_ns_hotplug_stress 00:10:32.007 ************************************ 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:32.007 * Looking for test storage... 00:10:32.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.007 11:22:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:32.007 11:22:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.007 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.008 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.008 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:32.008 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:32.008 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:32.008 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:32.008 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:32.008 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:32.008 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:32.008 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:32.008 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:32.008 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:32.008 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.008 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:32.008 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.008 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:32.008 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:32.008 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:32.008 11:22:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:40.143 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:40.143 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.143 11:22:07 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:40.143 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:40.143 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:40.143 11:22:07 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:40.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:40.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.533 ms 00:10:40.143 00:10:40.143 --- 10.0.0.2 ping statistics --- 00:10:40.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.143 rtt min/avg/max/mdev = 0.533/0.533/0.533/0.000 ms 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:40.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:40.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.425 ms 00:10:40.143 00:10:40.143 --- 10.0.0.1 ping statistics --- 00:10:40.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.143 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3416148 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3416148 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 3416148 ']' 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:40.143 11:22:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:40.143 [2024-07-15 11:22:07.916017] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
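The same namespace topology is reused for ns_hotplug_stress: the RPC calls below create subsystem nqn.2016-06.io.spdk:cnode1 (capped at 10 namespaces), a TCP listener on 10.0.0.2:4420, a Malloc0/Delay0 pair, and a 1000 MB null bdev NULL1 with 512-byte blocks, then launch spdk_nvme_perf for 30 seconds of 512-byte random reads at queue depth 128. While the perf job runs, the script repeatedly hot-removes and re-adds namespace 1 and resizes NULL1 one step per pass (null_size 1001, 1002, ...). A sketch of that loop, assuming PERF_PID holds the perf process id as in ns_hotplug_stress.sh and rpc.py is scripts/rpc.py:

  rpc.py bdev_null_create NULL1 1000 512                           # 1000 MB null bdev, 512-byte blocks
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do                        # loop while the perf workload is alive
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 # hot-remove namespace 1
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add it back
      null_size=$((null_size + 1))
      rpc.py bdev_null_resize NULL1 "$null_size"                   # resize NULL1 under load
  done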
00:10:40.143 [2024-07-15 11:22:07.916070] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.143 EAL: No free 2048 kB hugepages reported on node 1 00:10:40.143 [2024-07-15 11:22:07.999540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:40.143 [2024-07-15 11:22:08.063686] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.143 [2024-07-15 11:22:08.063724] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.143 [2024-07-15 11:22:08.063732] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.143 [2024-07-15 11:22:08.063738] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.143 [2024-07-15 11:22:08.063744] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:40.143 [2024-07-15 11:22:08.063855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.143 [2024-07-15 11:22:08.064016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.143 [2024-07-15 11:22:08.064017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.143 11:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:40.143 11:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:10:40.143 11:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:40.143 11:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:40.143 11:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:40.143 11:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.143 11:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:40.143 11:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:40.403 [2024-07-15 11:22:08.900060] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:40.403 11:22:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:40.403 11:22:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.663 [2024-07-15 11:22:09.237462] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.663 11:22:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:40.924 11:22:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:10:40.924 Malloc0 00:10:40.924 11:22:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:41.218 Delay0 00:10:41.218 11:22:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.478 11:22:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:41.478 NULL1 00:10:41.478 11:22:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:41.738 11:22:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3416657 00:10:41.738 11:22:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:41.738 11:22:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:41.738 11:22:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.738 EAL: No free 2048 kB hugepages reported on node 1 00:10:43.120 Read completed with error (sct=0, sc=11) 00:10:43.120 11:22:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.120 11:22:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:43.120 11:22:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:43.120 true 00:10:43.120 11:22:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:43.120 11:22:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.060 11:22:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.320 11:22:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:44.320 11:22:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:44.320 true 00:10:44.320 11:22:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill 
-0 3416657 00:10:44.320 11:22:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.580 11:22:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.840 11:22:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:44.840 11:22:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:44.840 true 00:10:44.840 11:22:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:44.840 11:22:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.099 11:22:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.099 11:22:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:45.099 11:22:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:45.358 true 00:10:45.358 11:22:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:45.358 11:22:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.618 11:22:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.618 11:22:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:45.618 11:22:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:45.878 true 00:10:45.878 11:22:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:45.878 11:22:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.137 11:22:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.137 11:22:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:46.137 11:22:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:46.395 true 00:10:46.395 11:22:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:46.395 11:22:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.655 11:22:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.655 11:22:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:46.655 11:22:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:46.916 true 00:10:46.916 11:22:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:46.916 11:22:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.916 11:22:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.176 11:22:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:47.176 11:22:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:47.436 true 00:10:47.436 11:22:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:47.436 11:22:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.436 11:22:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.696 11:22:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:47.696 11:22:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:47.956 true 00:10:47.956 11:22:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:47.956 11:22:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.956 11:22:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.218 11:22:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:48.218 11:22:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:48.218 true 00:10:48.478 11:22:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:48.478 11:22:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.478 11:22:17 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.738 11:22:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:48.738 11:22:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:48.738 true 00:10:48.738 11:22:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:48.738 11:22:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.999 11:22:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.259 11:22:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:49.259 11:22:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:49.259 true 00:10:49.259 11:22:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:49.259 11:22:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.519 11:22:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.780 11:22:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:49.780 11:22:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:49.780 true 00:10:49.780 11:22:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:49.780 11:22:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.040 11:22:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.040 11:22:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:50.040 11:22:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:50.301 true 00:10:50.301 11:22:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:50.301 11:22:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.562 11:22:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.562 11:22:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:50.562 11:22:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:50.822 true 00:10:50.822 11:22:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:50.822 11:22:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.082 11:22:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.082 11:22:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:51.082 11:22:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:51.343 true 00:10:51.343 11:22:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:51.343 11:22:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.604 11:22:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.604 11:22:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:51.604 11:22:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:51.864 true 00:10:51.864 11:22:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:51.864 11:22:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.125 11:22:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.125 11:22:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:52.125 11:22:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:52.385 true 00:10:52.385 11:22:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:52.385 11:22:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.646 11:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.646 11:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:52.646 11:22:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:52.906 true 00:10:52.906 11:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:52.906 11:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.906 11:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.166 11:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:53.166 11:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:53.426 true 00:10:53.426 11:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:53.426 11:22:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.368 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:54.369 11:22:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.369 11:22:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:54.369 11:22:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:54.369 true 00:10:54.629 11:22:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:54.629 11:22:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.629 11:22:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.888 11:22:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:54.888 11:22:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:54.888 true 00:10:54.888 11:22:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:54.888 11:22:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.185 11:22:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.185 11:22:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:55.185 11:22:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:55.446 true 00:10:55.446 11:22:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:55.446 11:22:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.760 11:22:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.760 11:22:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:55.760 11:22:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:56.021 true 00:10:56.021 11:22:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:56.021 11:22:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.021 11:22:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:56.281 11:22:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:56.281 11:22:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:56.281 true 00:10:56.281 11:22:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:56.281 11:22:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.221 11:22:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:57.481 11:22:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:57.481 11:22:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:57.481 true 00:10:57.747 11:22:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:57.747 11:22:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.747 11:22:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:58.008 11:22:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:58.008 11:22:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:58.008 true 00:10:58.008 11:22:26 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:58.008 11:22:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.268 11:22:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:58.529 11:22:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:58.529 11:22:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:58.529 true 00:10:58.529 11:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:58.530 11:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.790 11:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:58.790 11:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:58.790 11:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:59.050 true 00:10:59.050 11:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:59.050 11:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.309 11:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:59.309 11:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:59.309 11:22:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:59.569 true 00:10:59.569 11:22:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:10:59.569 11:22:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.512 11:22:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:00.512 11:22:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:11:00.512 11:22:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:11:00.773 true 00:11:00.773 11:22:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:11:00.773 11:22:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.773 11:22:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:01.034 11:22:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:11:01.034 11:22:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:11:01.294 true 00:11:01.294 11:22:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:11:01.294 11:22:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.294 11:22:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:01.554 11:22:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:11:01.554 11:22:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:11:01.554 true 00:11:01.815 11:22:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:11:01.815 11:22:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.815 11:22:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:02.075 11:22:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:11:02.075 11:22:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:11:02.075 true 00:11:02.075 11:22:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:11:02.075 11:22:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:02.335 11:22:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:02.595 11:22:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:11:02.595 11:22:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:11:02.595 true 00:11:02.595 11:22:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:11:02.595 11:22:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:03.536 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:11:03.536 11:22:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:03.797 11:22:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:11:03.797 11:22:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:11:03.797 true 00:11:03.797 11:22:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:11:03.797 11:22:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.058 11:22:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:04.319 11:22:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:11:04.319 11:22:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:11:04.319 true 00:11:04.319 11:22:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:11:04.319 11:22:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.580 11:22:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:04.840 11:22:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:11:04.840 11:22:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:11:04.840 true 00:11:04.840 11:22:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:11:04.840 11:22:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.100 11:22:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:05.361 11:22:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:11:05.361 11:22:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:11:05.361 true 00:11:05.361 11:22:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:11:05.361 11:22:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.622 11:22:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:05.622 11:22:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:11:05.622 11:22:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:11:05.882 true 00:11:05.882 11:22:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:11:05.882 11:22:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:06.825 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:06.825 11:22:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:06.825 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:06.825 11:22:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:11:06.825 11:22:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:11:07.085 true 00:11:07.085 11:22:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:11:07.085 11:22:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.346 11:22:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:07.346 11:22:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:11:07.346 11:22:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:11:07.607 true 00:11:07.607 11:22:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:11:07.607 11:22:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.868 11:22:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:07.868 11:22:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:11:07.868 11:22:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:11:08.129 true 00:11:08.129 11:22:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:11:08.129 11:22:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:08.129 11:22:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:08.390 11:22:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:11:08.390 11:22:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:11:08.650 true 00:11:08.650 11:22:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:11:08.650 11:22:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:08.650 11:22:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:08.911 11:22:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:11:08.911 11:22:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:11:08.911 true 00:11:09.173 11:22:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:11:09.173 11:22:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:09.173 11:22:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:09.433 11:22:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:11:09.433 11:22:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:11:09.433 true 00:11:09.433 11:22:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:11:09.433 11:22:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:09.706 11:22:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:10.018 11:22:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:11:10.018 11:22:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:11:10.018 true 00:11:10.018 11:22:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:11:10.018 11:22:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:10.277 11:22:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:10.277 11:22:38 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:11:10.277 11:22:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:11:10.538 true 00:11:10.538 11:22:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:11:10.538 11:22:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:10.800 11:22:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:10.800 11:22:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:11:10.800 11:22:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:11:11.061 true 00:11:11.061 11:22:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:11:11.061 11:22:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:12.002 11:22:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:12.002 Initializing NVMe Controllers 00:11:12.002 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:12.002 Controller IO queue size 128, less than required. 00:11:12.002 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:12.002 Controller IO queue size 128, less than required. 00:11:12.002 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:12.002 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:12.002 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:11:12.002 Initialization complete. Launching workers. 
00:11:12.002 ======================================================== 00:11:12.002 Latency(us) 00:11:12.002 Device Information : IOPS MiB/s Average min max 00:11:12.002 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 320.14 0.16 117506.78 2439.59 1102497.31 00:11:12.002 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9148.13 4.47 13992.42 2263.75 404623.81 00:11:12.002 ======================================================== 00:11:12.002 Total : 9468.26 4.62 17492.40 2263.75 1102497.31 00:11:12.002 00:11:12.002 11:22:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:11:12.002 11:22:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:11:12.263 true 00:11:12.263 11:22:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3416657 00:11:12.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3416657) - No such process 00:11:12.263 11:22:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3416657 00:11:12.263 11:22:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:12.525 11:22:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:12.525 11:22:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:11:12.525 11:22:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:11:12.525 11:22:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:11:12.525 11:22:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:12.525 11:22:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:11:12.786 null0 00:11:12.786 11:22:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:12.786 11:22:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:12.786 11:22:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:11:12.786 null1 00:11:12.786 11:22:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:12.786 11:22:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:12.786 11:22:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:11:13.047 null2 00:11:13.047 11:22:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:13.047 11:22:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:13.047 11:22:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:11:13.307 null3 
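The Total line in the performance summary above is just the two namespaces combined: the IOPS add up (320.14 + 9148.13 ≈ 9468.26) and the average latency is the IOPS-weighted mean, (320.14 × 117506.78 + 9148.13 × 13992.42) / 9468.26 ≈ 17492 us. The long add/remove/resize trace above comes from the loop at ns_hotplug_stress.sh lines 44-50 (with cleanup at lines 53-55): it keeps hot-removing namespace 1, re-adding it backed by the Delay0 bdev, and growing the NULL1 null bdev by one unit per pass until the background I/O process (PID 3416657 here) exits, at which point kill -0 fails with "No such process" and the script moves on. A minimal bash sketch of that loop, reconstructed from the xtrace; the helper names rpc, nqn and perf_pid are illustrative, untraced lines are omitted, and the real script may differ in detail:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
perf_pid=3416657                                 # the background I/O process seen in the trace
null_size=1000                                   # starting value assumed; this excerpt shows it counting up to 1050
while kill -0 "$perf_pid"; do                    # line 44: loop while the I/O process is still alive
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 1     # line 45: hot-remove namespace 1
    "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0   # line 46: re-attach it, backed by the Delay0 bdev
    null_size=$((null_size + 1))                 # line 49: grow the target size by one each pass
    "$rpc" bdev_null_resize NULL1 "$null_size"   # line 50: resize NULL1; rpc.py prints "true" on success
done                                             # kill -0 eventually fails ("No such process") once the process exits
wait "$perf_pid"                                 # line 53: reap the finished process
"$rpc" nvmf_subsystem_remove_ns "$nqn" 1         # lines 54-55: remove both namespaces before the next phase
"$rpc" nvmf_subsystem_remove_ns "$nqn" 2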
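From nthreads=8 onwards the trace shows the second phase of the test: eight null bdevs (null0 through null7) are created with bdev_null_create null<i> 100 4096, then eight add_remove workers are launched in the background, each repeatedly attaching and detaching its own namespace ID against nqn.2016-06.io.spdk:cnode1 until the wait on their PIDs (the "wait 3423057 ..." entry below). A sketch of that phase as read off the xtrace, again with illustrative helper names and loop bounds taken from the trace rather than from the script itself:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() {                                   # lines 14-18: one worker; keeps one namespace ID busy
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; ++i)); do
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"
    done
}

nthreads=8                                       # line 58
pids=()
for ((i = 0; i < nthreads; ++i)); do             # lines 59-60: one null bdev per worker (100 MB, 4096-byte blocks)
    "$rpc" bdev_null_create "null$i" 100 4096
done
for ((i = 0; i < nthreads; ++i)); do             # lines 62-64: run the workers concurrently
    add_remove $((i + 1)) "null$i" &
    pids+=($!)
done
wait "${pids[@]}"                                # line 66: the eight PIDs listed in the "wait 3423057 ..." entry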
00:11:13.307 11:22:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:13.307 11:22:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:13.307 11:22:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:11:13.307 null4 00:11:13.307 11:22:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:13.307 11:22:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:13.307 11:22:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:11:13.567 null5 00:11:13.567 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:13.567 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:13.567 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:11:13.567 null6 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:11:13.829 null7 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3423057 3423059 3423060 3423062 3423065 3423068 3423070 3423073 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.829 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:14.091 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.091 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:14.091 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:14.091 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:14.091 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:14.091 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:14.091 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:14.091 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:14.091 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.091 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.091 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:14.352 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.352 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.352 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:14.352 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.352 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.352 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:14.352 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.352 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.352 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:14.352 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.352 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.352 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:14.352 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.352 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.352 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:14.352 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.352 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.352 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:14.352 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.352 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.352 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:14.352 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:14.352 11:22:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.352 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:14.352 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:14.352 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:14.613 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:14.613 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:14.613 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:14.613 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.613 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.613 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:14.613 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.613 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.613 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:14.613 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.613 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.613 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:14.613 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.613 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.613 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:14.613 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.613 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.613 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:14.613 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.613 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.613 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:14.613 
11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.613 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.614 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:14.614 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.614 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.614 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:14.874 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:14.874 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:14.874 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:14.874 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.874 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:14.874 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:14.874 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:14.874 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:14.874 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.874 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.874 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:14.874 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.874 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.874 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:14.874 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.874 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.874 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:14.874 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.874 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.874 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:14.874 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.874 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.874 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:14.874 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.874 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.874 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.874 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.874 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:14.874 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:15.135 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.135 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.135 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:15.135 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:15.135 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:15.135 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.135 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:15.135 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:15.135 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:15.135 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:15.136 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:15.136 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.136 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.136 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:15.136 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.136 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.136 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:15.397 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.397 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.397 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:15.397 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.397 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.397 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:15.397 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.397 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.397 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.397 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.397 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:15.397 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:15.397 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.397 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.397 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:15.397 11:22:43 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.397 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.397 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:15.397 11:22:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:15.397 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:15.397 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:15.397 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.398 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:15.658 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:15.658 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:15.658 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:15.658 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.658 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.658 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:15.658 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.658 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.658 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:15.658 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.658 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.659 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:15.659 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.659 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.659 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.659 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.659 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:15.659 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:15.659 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.659 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.659 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:15.659 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.659 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.659 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:15.659 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.659 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.659 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:15.659 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:15.659 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:15.659 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.920 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:15.920 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:15.920 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:15.920 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:15.920 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:11:15.920 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.920 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:15.920 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:15.920 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.920 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.920 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:15.920 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.920 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.920 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:15.920 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.920 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.920 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:15.920 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.920 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.920 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:15.920 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.920 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:16.316 11:22:44 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.316 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:16.317 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.317 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.317 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:16.317 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.317 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.317 11:22:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:16.603 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.603 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:16.603 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:16.603 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.603 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.603 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:16.603 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:16.603 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:16.603 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:16.603 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:16.603 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.603 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.603 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:16.603 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.603 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.603 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:16.603 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.603 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.603 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:16.603 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:16.603 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.603 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.603 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:16.603 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.603 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.603 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:16.876 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.876 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.876 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:16.876 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.876 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.876 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:16.876 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:16.876 11:22:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.876 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:16.876 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.876 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.876 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:16.876 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:16.876 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:16.876 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:16.876 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:16.876 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.876 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.876 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:17.137 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.137 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.137 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:17.137 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.137 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.137 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:17.137 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.137 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.137 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:17.137 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:17.137 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.137 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.137 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:17.137 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.137 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.137 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.137 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.137 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:17.137 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:17.137 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:17.137 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:17.137 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:17.137 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:17.137 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.137 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.137 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:17.398 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:17.398 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:17.398 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.398 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.398 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.398 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.398 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
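(The loop's final iterations and the nvmftestfini teardown continue below.) Every xtrace line in this stretch comes from three lines of target/ns_hotplug_stress.sh: line 16 drives an arithmetic for-loop, line 17 hot-adds a namespace (nsid 1-8, each backed by bdev null0-null7) to nqn.2016-06.io.spdk:cnode1, and line 18 hot-removes it again. A minimal sketch reconstructed only from this trace is shown here; the add_remove helper name is hypothetical, and the scrambled nsid ordering above suggests the real script runs one such loop per namespace concurrently rather than the strict sequence a single loop would produce.

  # Sketch reconstructed from the trace above -- not the actual test script.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  add_remove() {                               # hypothetical helper name
          local nsid=$1 bdev=$2
          for (( i = 0; i < 10; ++i )); do     # ns_hotplug_stress.sh@16 in the trace
                  "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
                  "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
          done
  }

  for n in 1 2 3 4 5 6 7 8; do
          add_remove "$n" "null$((n - 1))" &   # nsid n is backed by bdev null(n-1)
  done
  wait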
00:11:17.398 11:22:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.398 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.398 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.398 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.398 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.398 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.398 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.398 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.398 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.398 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:17.398 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:11:17.398 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:17.398 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:11:17.398 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:17.398 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:11:17.398 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:17.398 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:17.398 rmmod nvme_tcp 00:11:17.398 rmmod nvme_fabrics 00:11:17.398 rmmod nvme_keyring 00:11:17.398 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:17.659 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:11:17.659 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:11:17.659 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3416148 ']' 00:11:17.659 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3416148 00:11:17.659 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 3416148 ']' 00:11:17.659 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 3416148 00:11:17.659 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:11:17.659 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:17.659 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3416148 00:11:17.659 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:17.659 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:17.659 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3416148' 00:11:17.659 killing process with pid 3416148 00:11:17.659 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 3416148 00:11:17.659 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 3416148 00:11:17.659 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:17.659 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:17.659 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:17.659 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:17.659 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:17.659 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.659 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:17.659 11:22:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.200 11:22:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:20.200 00:11:20.200 real 0m47.870s 00:11:20.200 user 3m11.313s 00:11:20.200 sys 0m15.068s 00:11:20.200 11:22:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:20.200 11:22:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:20.200 ************************************ 00:11:20.200 END TEST nvmf_ns_hotplug_stress 00:11:20.200 ************************************ 00:11:20.200 11:22:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:20.200 11:22:48 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:20.200 11:22:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:20.200 11:22:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:20.200 11:22:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:20.200 ************************************ 00:11:20.200 START TEST nvmf_connect_stress 00:11:20.200 ************************************ 00:11:20.200 11:22:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:20.201 * Looking for test storage... 
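The trace a few lines above shows the teardown path of the hotplug test (trap cleared at ns_hotplug_stress.sh@68, then nvmftestfini at @70): the nvme-tcp and nvme-fabrics modules are unloaded, the nvmf_tgt process (pid 3416148 in this run) is killed and waited for, and the leftover interface address is flushed. Condensed into standalone commands, and with the hidden _remove_spdk_ns step written out as an assumed ip netns delete (the trace redirects that function's output away), the sequence is roughly:

  # Condensed sketch of the nvmftestfini teardown seen in the trace above.
  sync                                  # nvmf/common.sh@117
  modprobe -v -r nvme-tcp               # drops nvme_tcp / nvme_fabrics / nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"    # nvmfpid was 3416148 in this run
  ip netns delete cvl_0_0_ns_spdk       # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1              # drop the initiator-side address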
00:11:20.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:20.201 11:22:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.784 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:26.784 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:26.784 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:26.784 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:26.784 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:26.784 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:26.784 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:26.784 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:26.784 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:26.784 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:26.784 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:26.784 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:26.784 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:26.784 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:26.784 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:26.784 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:26.784 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:26.784 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:26.784 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:26.784 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:26.784 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:26.784 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:26.784 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:26.784 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:26.784 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:26.784 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:26.784 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:26.784 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:26.784 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:26.785 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:26.785 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:26.785 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:26.785 11:22:55 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:26.785 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:26.785 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:27.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:27.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:11:27.046 00:11:27.046 --- 10.0.0.2 ping statistics --- 00:11:27.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.046 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:27.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:27.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.384 ms 00:11:27.046 00:11:27.046 --- 10.0.0.1 ping statistics --- 00:11:27.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.046 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3428191 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3428191 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 3428191 ']' 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:27.046 11:22:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.307 [2024-07-15 11:22:55.771081] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
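The nvmf_tcp_init steps traced above reduce to the following plain shell, condensed here for readability. This is a restatement of commands already shown in the trace, not new behavior; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing are specific to this rig's E810 ports rather than fixed values.

  # flush any stale addressing, then move one E810 port into a private namespace to play the target side
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # address the initiator (root namespace) and target (namespace) interfaces and bring them up
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port and verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # the target app is then launched inside the namespace, as traced above
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE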
00:11:27.307 [2024-07-15 11:22:55.771158] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.307 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.307 [2024-07-15 11:22:55.859355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:27.307 [2024-07-15 11:22:55.952717] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.307 [2024-07-15 11:22:55.952782] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:27.307 [2024-07-15 11:22:55.952790] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.307 [2024-07-15 11:22:55.952797] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.307 [2024-07-15 11:22:55.952803] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:27.307 [2024-07-15 11:22:55.952941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.307 [2024-07-15 11:22:55.953108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.307 [2024-07-15 11:22:55.953108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.877 11:22:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:27.877 11:22:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:11:27.877 11:22:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:27.877 11:22:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:27.877 11:22:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.137 [2024-07-15 11:22:56.594058] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.137 [2024-07-15 11:22:56.618509] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.137 NULL1 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3428366 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.137 EAL: No free 2048 kB hugepages reported on node 1 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.137 11:22:56 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:28.137 11:22:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:28.138 11:22:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.138 11:22:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.397 11:22:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.397 11:22:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:28.397 11:22:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:28.397 11:22:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.397 11:22:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.966 11:22:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.966 11:22:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:28.966 11:22:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:28.966 11:22:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.966 11:22:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.227 11:22:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.227 11:22:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 3428366 00:11:29.227 11:22:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.227 11:22:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.227 11:22:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.488 11:22:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.488 11:22:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:29.488 11:22:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.488 11:22:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.488 11:22:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.749 11:22:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.749 11:22:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:29.749 11:22:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.749 11:22:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.749 11:22:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.009 11:22:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.009 11:22:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:30.009 11:22:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.009 11:22:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.009 11:22:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.579 11:22:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.579 11:22:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:30.579 11:22:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.579 11:22:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.579 11:22:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.839 11:22:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.839 11:22:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:30.840 11:22:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.840 11:22:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.840 11:22:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.099 11:22:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.099 11:22:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:31.099 11:22:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.099 11:22:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.099 11:22:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.360 11:22:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.360 11:22:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:31.360 11:22:59 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.360 11:22:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.360 11:22:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.621 11:23:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.621 11:23:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:31.621 11:23:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.621 11:23:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.621 11:23:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.192 11:23:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.192 11:23:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:32.192 11:23:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:32.192 11:23:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.192 11:23:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.452 11:23:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.452 11:23:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:32.452 11:23:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:32.452 11:23:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.452 11:23:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.713 11:23:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.713 11:23:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:32.713 11:23:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:32.713 11:23:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.713 11:23:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.974 11:23:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.974 11:23:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:32.974 11:23:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:32.974 11:23:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.974 11:23:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.545 11:23:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.545 11:23:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:33.545 11:23:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.545 11:23:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.545 11:23:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.806 11:23:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.806 11:23:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:33.806 11:23:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:11:33.806 11:23:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.806 11:23:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.066 11:23:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.066 11:23:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:34.066 11:23:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.066 11:23:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.066 11:23:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.326 11:23:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.326 11:23:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:34.326 11:23:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.326 11:23:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.326 11:23:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.586 11:23:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.586 11:23:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:34.586 11:23:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.586 11:23:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.586 11:23:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.156 11:23:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.156 11:23:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:35.156 11:23:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.156 11:23:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.156 11:23:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.416 11:23:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.416 11:23:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:35.416 11:23:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.416 11:23:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.416 11:23:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.676 11:23:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.676 11:23:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:35.676 11:23:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.676 11:23:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.676 11:23:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.935 11:23:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.935 11:23:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:35.935 11:23:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.935 11:23:04 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.935 11:23:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.195 11:23:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.195 11:23:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:36.195 11:23:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.195 11:23:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.195 11:23:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.766 11:23:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.766 11:23:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:36.766 11:23:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.766 11:23:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.766 11:23:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.027 11:23:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.027 11:23:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:37.027 11:23:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.027 11:23:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.027 11:23:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.288 11:23:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.288 11:23:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:37.288 11:23:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.288 11:23:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.288 11:23:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.549 11:23:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.549 11:23:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:37.549 11:23:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.549 11:23:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.549 11:23:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.810 11:23:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.810 11:23:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:37.810 11:23:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.810 11:23:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.810 11:23:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.072 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:38.334 11:23:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.334 11:23:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3428366 00:11:38.334 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3428366) - No such process 00:11:38.334 11:23:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3428366 00:11:38.334 11:23:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:38.334 11:23:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:38.334 11:23:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:38.334 11:23:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:38.334 11:23:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:38.334 11:23:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:38.334 11:23:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:38.334 11:23:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:38.334 11:23:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:38.334 rmmod nvme_tcp 00:11:38.334 rmmod nvme_fabrics 00:11:38.334 rmmod nvme_keyring 00:11:38.334 11:23:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:38.334 11:23:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:38.334 11:23:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:38.334 11:23:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3428191 ']' 00:11:38.334 11:23:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3428191 00:11:38.334 11:23:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 3428191 ']' 00:11:38.334 11:23:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 3428191 00:11:38.334 11:23:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:11:38.334 11:23:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:38.334 11:23:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3428191 00:11:38.334 11:23:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:38.334 11:23:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:38.334 11:23:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3428191' 00:11:38.334 killing process with pid 3428191 00:11:38.334 11:23:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 3428191 00:11:38.334 11:23:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 3428191 00:11:38.597 11:23:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:38.597 11:23:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:38.597 11:23:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:38.597 11:23:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:38.597 11:23:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:38.597 11:23:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.597 11:23:07 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:38.597 11:23:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.557 11:23:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:40.557 00:11:40.557 real 0m20.698s 00:11:40.557 user 0m41.983s 00:11:40.557 sys 0m8.659s 00:11:40.557 11:23:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:40.557 11:23:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.557 ************************************ 00:11:40.557 END TEST nvmf_connect_stress 00:11:40.557 ************************************ 00:11:40.557 11:23:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:40.557 11:23:09 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:40.557 11:23:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:40.557 11:23:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:40.557 11:23:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:40.557 ************************************ 00:11:40.557 START TEST nvmf_fused_ordering 00:11:40.557 ************************************ 00:11:40.557 11:23:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:40.819 * Looking for test storage... 00:11:40.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 
-- # NET_TYPE=phy 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:40.819 11:23:09 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:40.819 11:23:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:47.410 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:47.410 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:47.411 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:47.411 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:47.411 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:47.411 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:47.673 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:47.673 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:47.673 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:47.673 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:47.673 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:47.673 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:47.673 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:47.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:47.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:11:47.673 00:11:47.673 --- 10.0.0.2 ping statistics --- 00:11:47.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.673 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:11:47.673 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:47.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:47.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:11:47.673 00:11:47.673 --- 10.0.0.1 ping statistics --- 00:11:47.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.673 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:11:47.673 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.673 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:47.673 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:47.673 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.673 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:47.673 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:47.673 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.673 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:47.673 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:47.673 11:23:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:47.673 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:47.673 11:23:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:47.673 11:23:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:47.934 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3434570 00:11:47.934 11:23:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3434570 00:11:47.934 11:23:16 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:47.934 11:23:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 3434570 ']' 00:11:47.934 11:23:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.934 11:23:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:47.934 11:23:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.934 11:23:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:47.934 11:23:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:47.934 [2024-07-15 11:23:16.433389] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:11:47.934 [2024-07-15 11:23:16.433456] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.934 EAL: No free 2048 kB hugepages reported on node 1 00:11:47.934 [2024-07-15 11:23:16.522813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.934 [2024-07-15 11:23:16.617144] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.934 [2024-07-15 11:23:16.617202] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:47.934 [2024-07-15 11:23:16.617210] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:47.935 [2024-07-15 11:23:16.617218] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:47.935 [2024-07-15 11:23:16.617224] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
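With the target up under the 0x2 core mask, the test next configures it over RPC. The rpc_cmd calls traced just below are roughly equivalent to issuing the same methods by hand with scripts/rpc.py against the app's default /var/tmp/spdk.sock; the rpc.py invocation form is an assumption about how one would reproduce them manually, while the method names and arguments are verbatim from the trace.

  # TCP transport, then a subsystem allowing up to 10 namespaces
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  # listen on the namespaced target address set up earlier
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # a 1000 MiB null bdev with 512-byte blocks backs the namespace (reported as 1GB below)
  scripts/rpc.py bdev_null_create NULL1 1000 512
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering binary then connects as an initiator with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1', and each fused_ordering(N) line in the output below is progress printed by that tool.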
00:11:47.935 [2024-07-15 11:23:16.617253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:48.880 [2024-07-15 11:23:17.274843] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:48.880 [2024-07-15 11:23:17.291093] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:48.880 NULL1 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.880 11:23:17 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.880 11:23:17 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:48.880 [2024-07-15 11:23:17.349893] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:11:48.880 [2024-07-15 11:23:17.349935] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3434760 ] 00:11:48.880 EAL: No free 2048 kB hugepages reported on node 1 00:11:49.142 Attached to nqn.2016-06.io.spdk:cnode1 00:11:49.142 Namespace ID: 1 size: 1GB 00:11:49.142 fused_ordering(0) 00:11:49.142 fused_ordering(1) 00:11:49.142 fused_ordering(2) 00:11:49.142 fused_ordering(3) 00:11:49.142 fused_ordering(4) 00:11:49.142 fused_ordering(5) 00:11:49.142 fused_ordering(6) 00:11:49.142 fused_ordering(7) 00:11:49.142 fused_ordering(8) 00:11:49.142 fused_ordering(9) 00:11:49.142 fused_ordering(10) 00:11:49.142 fused_ordering(11) 00:11:49.142 fused_ordering(12) 00:11:49.142 fused_ordering(13) 00:11:49.142 fused_ordering(14) 00:11:49.142 fused_ordering(15) 00:11:49.142 fused_ordering(16) 00:11:49.142 fused_ordering(17) 00:11:49.142 fused_ordering(18) 00:11:49.142 fused_ordering(19) 00:11:49.142 fused_ordering(20) 00:11:49.142 fused_ordering(21) 00:11:49.142 fused_ordering(22) 00:11:49.142 fused_ordering(23) 00:11:49.142 fused_ordering(24) 00:11:49.142 fused_ordering(25) 00:11:49.142 fused_ordering(26) 00:11:49.142 fused_ordering(27) 00:11:49.142 fused_ordering(28) 00:11:49.142 fused_ordering(29) 00:11:49.142 fused_ordering(30) 00:11:49.142 fused_ordering(31) 00:11:49.142 fused_ordering(32) 00:11:49.142 fused_ordering(33) 00:11:49.142 fused_ordering(34) 00:11:49.142 fused_ordering(35) 00:11:49.142 fused_ordering(36) 00:11:49.142 fused_ordering(37) 00:11:49.142 fused_ordering(38) 00:11:49.142 fused_ordering(39) 00:11:49.142 fused_ordering(40) 00:11:49.142 fused_ordering(41) 00:11:49.142 fused_ordering(42) 00:11:49.142 fused_ordering(43) 00:11:49.142 fused_ordering(44) 00:11:49.142 fused_ordering(45) 00:11:49.142 fused_ordering(46) 00:11:49.142 fused_ordering(47) 00:11:49.142 fused_ordering(48) 00:11:49.142 fused_ordering(49) 00:11:49.142 fused_ordering(50) 00:11:49.142 fused_ordering(51) 00:11:49.142 fused_ordering(52) 00:11:49.142 fused_ordering(53) 00:11:49.142 fused_ordering(54) 00:11:49.142 fused_ordering(55) 00:11:49.142 fused_ordering(56) 00:11:49.142 fused_ordering(57) 00:11:49.142 fused_ordering(58) 00:11:49.142 fused_ordering(59) 00:11:49.142 fused_ordering(60) 00:11:49.142 fused_ordering(61) 00:11:49.142 fused_ordering(62) 00:11:49.142 fused_ordering(63) 00:11:49.142 fused_ordering(64) 00:11:49.142 fused_ordering(65) 00:11:49.142 fused_ordering(66) 00:11:49.142 fused_ordering(67) 00:11:49.142 fused_ordering(68) 00:11:49.142 fused_ordering(69) 00:11:49.142 fused_ordering(70) 00:11:49.142 fused_ordering(71) 00:11:49.142 fused_ordering(72) 00:11:49.142 fused_ordering(73) 00:11:49.142 fused_ordering(74) 00:11:49.142 fused_ordering(75) 00:11:49.142 fused_ordering(76) 00:11:49.142 fused_ordering(77) 00:11:49.142 fused_ordering(78) 00:11:49.142 
fused_ordering(79) 00:11:49.142 fused_ordering(80) 00:11:49.142 fused_ordering(81) 00:11:49.142 fused_ordering(82) 00:11:49.142 fused_ordering(83) 00:11:49.142 fused_ordering(84) 00:11:49.142 fused_ordering(85) 00:11:49.142 fused_ordering(86) 00:11:49.142 fused_ordering(87) 00:11:49.142 fused_ordering(88) 00:11:49.142 fused_ordering(89) 00:11:49.142 fused_ordering(90) 00:11:49.142 fused_ordering(91) 00:11:49.142 fused_ordering(92) 00:11:49.142 fused_ordering(93) 00:11:49.142 fused_ordering(94) 00:11:49.142 fused_ordering(95) 00:11:49.142 fused_ordering(96) 00:11:49.142 fused_ordering(97) 00:11:49.142 fused_ordering(98) 00:11:49.142 fused_ordering(99) 00:11:49.142 fused_ordering(100) 00:11:49.142 fused_ordering(101) 00:11:49.142 fused_ordering(102) 00:11:49.142 fused_ordering(103) 00:11:49.142 fused_ordering(104) 00:11:49.142 fused_ordering(105) 00:11:49.142 fused_ordering(106) 00:11:49.142 fused_ordering(107) 00:11:49.142 fused_ordering(108) 00:11:49.142 fused_ordering(109) 00:11:49.142 fused_ordering(110) 00:11:49.142 fused_ordering(111) 00:11:49.142 fused_ordering(112) 00:11:49.142 fused_ordering(113) 00:11:49.142 fused_ordering(114) 00:11:49.142 fused_ordering(115) 00:11:49.142 fused_ordering(116) 00:11:49.142 fused_ordering(117) 00:11:49.142 fused_ordering(118) 00:11:49.142 fused_ordering(119) 00:11:49.142 fused_ordering(120) 00:11:49.142 fused_ordering(121) 00:11:49.142 fused_ordering(122) 00:11:49.142 fused_ordering(123) 00:11:49.142 fused_ordering(124) 00:11:49.142 fused_ordering(125) 00:11:49.142 fused_ordering(126) 00:11:49.142 fused_ordering(127) 00:11:49.142 fused_ordering(128) 00:11:49.142 fused_ordering(129) 00:11:49.142 fused_ordering(130) 00:11:49.142 fused_ordering(131) 00:11:49.142 fused_ordering(132) 00:11:49.142 fused_ordering(133) 00:11:49.142 fused_ordering(134) 00:11:49.142 fused_ordering(135) 00:11:49.142 fused_ordering(136) 00:11:49.142 fused_ordering(137) 00:11:49.142 fused_ordering(138) 00:11:49.142 fused_ordering(139) 00:11:49.142 fused_ordering(140) 00:11:49.142 fused_ordering(141) 00:11:49.142 fused_ordering(142) 00:11:49.142 fused_ordering(143) 00:11:49.142 fused_ordering(144) 00:11:49.142 fused_ordering(145) 00:11:49.142 fused_ordering(146) 00:11:49.142 fused_ordering(147) 00:11:49.142 fused_ordering(148) 00:11:49.142 fused_ordering(149) 00:11:49.142 fused_ordering(150) 00:11:49.142 fused_ordering(151) 00:11:49.142 fused_ordering(152) 00:11:49.142 fused_ordering(153) 00:11:49.142 fused_ordering(154) 00:11:49.142 fused_ordering(155) 00:11:49.142 fused_ordering(156) 00:11:49.142 fused_ordering(157) 00:11:49.142 fused_ordering(158) 00:11:49.142 fused_ordering(159) 00:11:49.142 fused_ordering(160) 00:11:49.142 fused_ordering(161) 00:11:49.142 fused_ordering(162) 00:11:49.142 fused_ordering(163) 00:11:49.142 fused_ordering(164) 00:11:49.142 fused_ordering(165) 00:11:49.142 fused_ordering(166) 00:11:49.142 fused_ordering(167) 00:11:49.142 fused_ordering(168) 00:11:49.142 fused_ordering(169) 00:11:49.142 fused_ordering(170) 00:11:49.142 fused_ordering(171) 00:11:49.142 fused_ordering(172) 00:11:49.142 fused_ordering(173) 00:11:49.142 fused_ordering(174) 00:11:49.142 fused_ordering(175) 00:11:49.142 fused_ordering(176) 00:11:49.142 fused_ordering(177) 00:11:49.143 fused_ordering(178) 00:11:49.143 fused_ordering(179) 00:11:49.143 fused_ordering(180) 00:11:49.143 fused_ordering(181) 00:11:49.143 fused_ordering(182) 00:11:49.143 fused_ordering(183) 00:11:49.143 fused_ordering(184) 00:11:49.143 fused_ordering(185) 00:11:49.143 fused_ordering(186) 00:11:49.143 
fused_ordering(187) 00:11:49.143 fused_ordering(188) 00:11:49.143 fused_ordering(189) 00:11:49.143 fused_ordering(190) 00:11:49.143 fused_ordering(191) 00:11:49.143 fused_ordering(192) 00:11:49.143 fused_ordering(193) 00:11:49.143 fused_ordering(194) 00:11:49.143 fused_ordering(195) 00:11:49.143 fused_ordering(196) 00:11:49.143 fused_ordering(197) 00:11:49.143 fused_ordering(198) 00:11:49.143 fused_ordering(199) 00:11:49.143 fused_ordering(200) 00:11:49.143 fused_ordering(201) 00:11:49.143 fused_ordering(202) 00:11:49.143 fused_ordering(203) 00:11:49.143 fused_ordering(204) 00:11:49.143 fused_ordering(205) 00:11:49.781 fused_ordering(206) 00:11:49.781 fused_ordering(207) 00:11:49.781 fused_ordering(208) 00:11:49.781 fused_ordering(209) 00:11:49.781 fused_ordering(210) 00:11:49.781 fused_ordering(211) 00:11:49.781 fused_ordering(212) 00:11:49.781 fused_ordering(213) 00:11:49.781 fused_ordering(214) 00:11:49.781 fused_ordering(215) 00:11:49.781 fused_ordering(216) 00:11:49.781 fused_ordering(217) 00:11:49.781 fused_ordering(218) 00:11:49.781 fused_ordering(219) 00:11:49.781 fused_ordering(220) 00:11:49.781 fused_ordering(221) 00:11:49.781 fused_ordering(222) 00:11:49.781 fused_ordering(223) 00:11:49.781 fused_ordering(224) 00:11:49.781 fused_ordering(225) 00:11:49.781 fused_ordering(226) 00:11:49.781 fused_ordering(227) 00:11:49.781 fused_ordering(228) 00:11:49.781 fused_ordering(229) 00:11:49.781 fused_ordering(230) 00:11:49.781 fused_ordering(231) 00:11:49.781 fused_ordering(232) 00:11:49.781 fused_ordering(233) 00:11:49.781 fused_ordering(234) 00:11:49.781 fused_ordering(235) 00:11:49.781 fused_ordering(236) 00:11:49.781 fused_ordering(237) 00:11:49.781 fused_ordering(238) 00:11:49.781 fused_ordering(239) 00:11:49.781 fused_ordering(240) 00:11:49.781 fused_ordering(241) 00:11:49.781 fused_ordering(242) 00:11:49.781 fused_ordering(243) 00:11:49.781 fused_ordering(244) 00:11:49.781 fused_ordering(245) 00:11:49.781 fused_ordering(246) 00:11:49.781 fused_ordering(247) 00:11:49.781 fused_ordering(248) 00:11:49.781 fused_ordering(249) 00:11:49.781 fused_ordering(250) 00:11:49.781 fused_ordering(251) 00:11:49.781 fused_ordering(252) 00:11:49.781 fused_ordering(253) 00:11:49.781 fused_ordering(254) 00:11:49.781 fused_ordering(255) 00:11:49.781 fused_ordering(256) 00:11:49.781 fused_ordering(257) 00:11:49.781 fused_ordering(258) 00:11:49.781 fused_ordering(259) 00:11:49.781 fused_ordering(260) 00:11:49.781 fused_ordering(261) 00:11:49.781 fused_ordering(262) 00:11:49.781 fused_ordering(263) 00:11:49.781 fused_ordering(264) 00:11:49.781 fused_ordering(265) 00:11:49.781 fused_ordering(266) 00:11:49.781 fused_ordering(267) 00:11:49.781 fused_ordering(268) 00:11:49.781 fused_ordering(269) 00:11:49.781 fused_ordering(270) 00:11:49.781 fused_ordering(271) 00:11:49.781 fused_ordering(272) 00:11:49.781 fused_ordering(273) 00:11:49.781 fused_ordering(274) 00:11:49.781 fused_ordering(275) 00:11:49.781 fused_ordering(276) 00:11:49.781 fused_ordering(277) 00:11:49.781 fused_ordering(278) 00:11:49.781 fused_ordering(279) 00:11:49.781 fused_ordering(280) 00:11:49.781 fused_ordering(281) 00:11:49.781 fused_ordering(282) 00:11:49.781 fused_ordering(283) 00:11:49.781 fused_ordering(284) 00:11:49.781 fused_ordering(285) 00:11:49.781 fused_ordering(286) 00:11:49.781 fused_ordering(287) 00:11:49.781 fused_ordering(288) 00:11:49.781 fused_ordering(289) 00:11:49.781 fused_ordering(290) 00:11:49.781 fused_ordering(291) 00:11:49.781 fused_ordering(292) 00:11:49.781 fused_ordering(293) 00:11:49.781 fused_ordering(294) 
00:11:49.781 fused_ordering(295) 00:11:49.781 fused_ordering(296) 00:11:49.781 fused_ordering(297) 00:11:49.781 fused_ordering(298) 00:11:49.781 fused_ordering(299) 00:11:49.781 fused_ordering(300) 00:11:49.781 fused_ordering(301) 00:11:49.781 fused_ordering(302) 00:11:49.781 fused_ordering(303) 00:11:49.781 fused_ordering(304) 00:11:49.781 fused_ordering(305) 00:11:49.781 fused_ordering(306) 00:11:49.781 fused_ordering(307) 00:11:49.781 fused_ordering(308) 00:11:49.781 fused_ordering(309) 00:11:49.781 fused_ordering(310) 00:11:49.781 fused_ordering(311) 00:11:49.781 fused_ordering(312) 00:11:49.781 fused_ordering(313) 00:11:49.781 fused_ordering(314) 00:11:49.781 fused_ordering(315) 00:11:49.781 fused_ordering(316) 00:11:49.781 fused_ordering(317) 00:11:49.781 fused_ordering(318) 00:11:49.781 fused_ordering(319) 00:11:49.781 fused_ordering(320) 00:11:49.781 fused_ordering(321) 00:11:49.781 fused_ordering(322) 00:11:49.781 fused_ordering(323) 00:11:49.781 fused_ordering(324) 00:11:49.781 fused_ordering(325) 00:11:49.781 fused_ordering(326) 00:11:49.781 fused_ordering(327) 00:11:49.781 fused_ordering(328) 00:11:49.781 fused_ordering(329) 00:11:49.781 fused_ordering(330) 00:11:49.781 fused_ordering(331) 00:11:49.781 fused_ordering(332) 00:11:49.781 fused_ordering(333) 00:11:49.781 fused_ordering(334) 00:11:49.781 fused_ordering(335) 00:11:49.781 fused_ordering(336) 00:11:49.781 fused_ordering(337) 00:11:49.781 fused_ordering(338) 00:11:49.781 fused_ordering(339) 00:11:49.781 fused_ordering(340) 00:11:49.781 fused_ordering(341) 00:11:49.781 fused_ordering(342) 00:11:49.781 fused_ordering(343) 00:11:49.781 fused_ordering(344) 00:11:49.781 fused_ordering(345) 00:11:49.781 fused_ordering(346) 00:11:49.781 fused_ordering(347) 00:11:49.781 fused_ordering(348) 00:11:49.781 fused_ordering(349) 00:11:49.781 fused_ordering(350) 00:11:49.781 fused_ordering(351) 00:11:49.781 fused_ordering(352) 00:11:49.781 fused_ordering(353) 00:11:49.781 fused_ordering(354) 00:11:49.781 fused_ordering(355) 00:11:49.781 fused_ordering(356) 00:11:49.781 fused_ordering(357) 00:11:49.781 fused_ordering(358) 00:11:49.781 fused_ordering(359) 00:11:49.781 fused_ordering(360) 00:11:49.781 fused_ordering(361) 00:11:49.781 fused_ordering(362) 00:11:49.781 fused_ordering(363) 00:11:49.781 fused_ordering(364) 00:11:49.781 fused_ordering(365) 00:11:49.781 fused_ordering(366) 00:11:49.781 fused_ordering(367) 00:11:49.781 fused_ordering(368) 00:11:49.781 fused_ordering(369) 00:11:49.781 fused_ordering(370) 00:11:49.781 fused_ordering(371) 00:11:49.781 fused_ordering(372) 00:11:49.781 fused_ordering(373) 00:11:49.781 fused_ordering(374) 00:11:49.781 fused_ordering(375) 00:11:49.781 fused_ordering(376) 00:11:49.781 fused_ordering(377) 00:11:49.781 fused_ordering(378) 00:11:49.781 fused_ordering(379) 00:11:49.781 fused_ordering(380) 00:11:49.781 fused_ordering(381) 00:11:49.781 fused_ordering(382) 00:11:49.781 fused_ordering(383) 00:11:49.781 fused_ordering(384) 00:11:49.781 fused_ordering(385) 00:11:49.781 fused_ordering(386) 00:11:49.781 fused_ordering(387) 00:11:49.781 fused_ordering(388) 00:11:49.781 fused_ordering(389) 00:11:49.781 fused_ordering(390) 00:11:49.781 fused_ordering(391) 00:11:49.781 fused_ordering(392) 00:11:49.781 fused_ordering(393) 00:11:49.781 fused_ordering(394) 00:11:49.781 fused_ordering(395) 00:11:49.781 fused_ordering(396) 00:11:49.781 fused_ordering(397) 00:11:49.781 fused_ordering(398) 00:11:49.781 fused_ordering(399) 00:11:49.781 fused_ordering(400) 00:11:49.781 fused_ordering(401) 00:11:49.781 
fused_ordering(402) 00:11:49.781 fused_ordering(403) 00:11:49.781 fused_ordering(404) 00:11:49.781 fused_ordering(405) 00:11:49.781 fused_ordering(406) 00:11:49.781 fused_ordering(407) 00:11:49.781 fused_ordering(408) 00:11:49.781 fused_ordering(409) 00:11:49.781 fused_ordering(410) 00:11:50.042 fused_ordering(411) 00:11:50.042 fused_ordering(412) 00:11:50.042 fused_ordering(413) 00:11:50.042 fused_ordering(414) 00:11:50.042 fused_ordering(415) 00:11:50.042 fused_ordering(416) 00:11:50.042 fused_ordering(417) 00:11:50.042 fused_ordering(418) 00:11:50.042 fused_ordering(419) 00:11:50.042 fused_ordering(420) 00:11:50.042 fused_ordering(421) 00:11:50.042 fused_ordering(422) 00:11:50.042 fused_ordering(423) 00:11:50.042 fused_ordering(424) 00:11:50.042 fused_ordering(425) 00:11:50.042 fused_ordering(426) 00:11:50.042 fused_ordering(427) 00:11:50.042 fused_ordering(428) 00:11:50.042 fused_ordering(429) 00:11:50.042 fused_ordering(430) 00:11:50.042 fused_ordering(431) 00:11:50.042 fused_ordering(432) 00:11:50.042 fused_ordering(433) 00:11:50.042 fused_ordering(434) 00:11:50.042 fused_ordering(435) 00:11:50.042 fused_ordering(436) 00:11:50.042 fused_ordering(437) 00:11:50.043 fused_ordering(438) 00:11:50.043 fused_ordering(439) 00:11:50.043 fused_ordering(440) 00:11:50.043 fused_ordering(441) 00:11:50.043 fused_ordering(442) 00:11:50.043 fused_ordering(443) 00:11:50.043 fused_ordering(444) 00:11:50.043 fused_ordering(445) 00:11:50.043 fused_ordering(446) 00:11:50.043 fused_ordering(447) 00:11:50.043 fused_ordering(448) 00:11:50.043 fused_ordering(449) 00:11:50.043 fused_ordering(450) 00:11:50.043 fused_ordering(451) 00:11:50.043 fused_ordering(452) 00:11:50.043 fused_ordering(453) 00:11:50.043 fused_ordering(454) 00:11:50.043 fused_ordering(455) 00:11:50.043 fused_ordering(456) 00:11:50.043 fused_ordering(457) 00:11:50.043 fused_ordering(458) 00:11:50.043 fused_ordering(459) 00:11:50.043 fused_ordering(460) 00:11:50.043 fused_ordering(461) 00:11:50.043 fused_ordering(462) 00:11:50.043 fused_ordering(463) 00:11:50.043 fused_ordering(464) 00:11:50.043 fused_ordering(465) 00:11:50.043 fused_ordering(466) 00:11:50.043 fused_ordering(467) 00:11:50.043 fused_ordering(468) 00:11:50.043 fused_ordering(469) 00:11:50.043 fused_ordering(470) 00:11:50.043 fused_ordering(471) 00:11:50.043 fused_ordering(472) 00:11:50.043 fused_ordering(473) 00:11:50.043 fused_ordering(474) 00:11:50.043 fused_ordering(475) 00:11:50.043 fused_ordering(476) 00:11:50.043 fused_ordering(477) 00:11:50.043 fused_ordering(478) 00:11:50.043 fused_ordering(479) 00:11:50.043 fused_ordering(480) 00:11:50.043 fused_ordering(481) 00:11:50.043 fused_ordering(482) 00:11:50.043 fused_ordering(483) 00:11:50.043 fused_ordering(484) 00:11:50.043 fused_ordering(485) 00:11:50.043 fused_ordering(486) 00:11:50.043 fused_ordering(487) 00:11:50.043 fused_ordering(488) 00:11:50.043 fused_ordering(489) 00:11:50.043 fused_ordering(490) 00:11:50.043 fused_ordering(491) 00:11:50.043 fused_ordering(492) 00:11:50.043 fused_ordering(493) 00:11:50.043 fused_ordering(494) 00:11:50.043 fused_ordering(495) 00:11:50.043 fused_ordering(496) 00:11:50.043 fused_ordering(497) 00:11:50.043 fused_ordering(498) 00:11:50.043 fused_ordering(499) 00:11:50.043 fused_ordering(500) 00:11:50.043 fused_ordering(501) 00:11:50.043 fused_ordering(502) 00:11:50.043 fused_ordering(503) 00:11:50.043 fused_ordering(504) 00:11:50.043 fused_ordering(505) 00:11:50.043 fused_ordering(506) 00:11:50.043 fused_ordering(507) 00:11:50.043 fused_ordering(508) 00:11:50.043 fused_ordering(509) 
00:11:50.043 fused_ordering(510) 00:11:50.043 fused_ordering(511) 00:11:50.043 fused_ordering(512) 00:11:50.043 fused_ordering(513) 00:11:50.043 fused_ordering(514) 00:11:50.043 fused_ordering(515) 00:11:50.043 fused_ordering(516) 00:11:50.043 fused_ordering(517) 00:11:50.043 fused_ordering(518) 00:11:50.043 fused_ordering(519) 00:11:50.043 fused_ordering(520) 00:11:50.043 fused_ordering(521) 00:11:50.043 fused_ordering(522) 00:11:50.043 fused_ordering(523) 00:11:50.043 fused_ordering(524) 00:11:50.043 fused_ordering(525) 00:11:50.043 fused_ordering(526) 00:11:50.043 fused_ordering(527) 00:11:50.043 fused_ordering(528) 00:11:50.043 fused_ordering(529) 00:11:50.043 fused_ordering(530) 00:11:50.043 fused_ordering(531) 00:11:50.043 fused_ordering(532) 00:11:50.043 fused_ordering(533) 00:11:50.043 fused_ordering(534) 00:11:50.043 fused_ordering(535) 00:11:50.043 fused_ordering(536) 00:11:50.043 fused_ordering(537) 00:11:50.043 fused_ordering(538) 00:11:50.043 fused_ordering(539) 00:11:50.043 fused_ordering(540) 00:11:50.043 fused_ordering(541) 00:11:50.043 fused_ordering(542) 00:11:50.043 fused_ordering(543) 00:11:50.043 fused_ordering(544) 00:11:50.043 fused_ordering(545) 00:11:50.043 fused_ordering(546) 00:11:50.043 fused_ordering(547) 00:11:50.043 fused_ordering(548) 00:11:50.043 fused_ordering(549) 00:11:50.043 fused_ordering(550) 00:11:50.043 fused_ordering(551) 00:11:50.043 fused_ordering(552) 00:11:50.043 fused_ordering(553) 00:11:50.043 fused_ordering(554) 00:11:50.043 fused_ordering(555) 00:11:50.043 fused_ordering(556) 00:11:50.043 fused_ordering(557) 00:11:50.043 fused_ordering(558) 00:11:50.043 fused_ordering(559) 00:11:50.043 fused_ordering(560) 00:11:50.043 fused_ordering(561) 00:11:50.043 fused_ordering(562) 00:11:50.043 fused_ordering(563) 00:11:50.043 fused_ordering(564) 00:11:50.043 fused_ordering(565) 00:11:50.043 fused_ordering(566) 00:11:50.043 fused_ordering(567) 00:11:50.043 fused_ordering(568) 00:11:50.043 fused_ordering(569) 00:11:50.043 fused_ordering(570) 00:11:50.043 fused_ordering(571) 00:11:50.043 fused_ordering(572) 00:11:50.043 fused_ordering(573) 00:11:50.043 fused_ordering(574) 00:11:50.043 fused_ordering(575) 00:11:50.043 fused_ordering(576) 00:11:50.043 fused_ordering(577) 00:11:50.043 fused_ordering(578) 00:11:50.043 fused_ordering(579) 00:11:50.043 fused_ordering(580) 00:11:50.043 fused_ordering(581) 00:11:50.043 fused_ordering(582) 00:11:50.043 fused_ordering(583) 00:11:50.043 fused_ordering(584) 00:11:50.043 fused_ordering(585) 00:11:50.043 fused_ordering(586) 00:11:50.043 fused_ordering(587) 00:11:50.043 fused_ordering(588) 00:11:50.043 fused_ordering(589) 00:11:50.043 fused_ordering(590) 00:11:50.043 fused_ordering(591) 00:11:50.043 fused_ordering(592) 00:11:50.043 fused_ordering(593) 00:11:50.043 fused_ordering(594) 00:11:50.043 fused_ordering(595) 00:11:50.043 fused_ordering(596) 00:11:50.043 fused_ordering(597) 00:11:50.043 fused_ordering(598) 00:11:50.043 fused_ordering(599) 00:11:50.043 fused_ordering(600) 00:11:50.043 fused_ordering(601) 00:11:50.043 fused_ordering(602) 00:11:50.043 fused_ordering(603) 00:11:50.043 fused_ordering(604) 00:11:50.043 fused_ordering(605) 00:11:50.043 fused_ordering(606) 00:11:50.043 fused_ordering(607) 00:11:50.043 fused_ordering(608) 00:11:50.043 fused_ordering(609) 00:11:50.043 fused_ordering(610) 00:11:50.043 fused_ordering(611) 00:11:50.043 fused_ordering(612) 00:11:50.043 fused_ordering(613) 00:11:50.043 fused_ordering(614) 00:11:50.043 fused_ordering(615) 00:11:50.987 fused_ordering(616) 00:11:50.987 
fused_ordering(617) 00:11:50.987 fused_ordering(618) 00:11:50.987 fused_ordering(619) 00:11:50.987 fused_ordering(620) 00:11:50.987 fused_ordering(621) 00:11:50.987 fused_ordering(622) 00:11:50.987 fused_ordering(623) 00:11:50.987 fused_ordering(624) 00:11:50.987 fused_ordering(625) 00:11:50.987 fused_ordering(626) 00:11:50.987 fused_ordering(627) 00:11:50.987 fused_ordering(628) 00:11:50.987 fused_ordering(629) 00:11:50.987 fused_ordering(630) 00:11:50.987 fused_ordering(631) 00:11:50.987 fused_ordering(632) 00:11:50.987 fused_ordering(633) 00:11:50.987 fused_ordering(634) 00:11:50.987 fused_ordering(635) 00:11:50.987 fused_ordering(636) 00:11:50.987 fused_ordering(637) 00:11:50.987 fused_ordering(638) 00:11:50.987 fused_ordering(639) 00:11:50.987 fused_ordering(640) 00:11:50.987 fused_ordering(641) 00:11:50.987 fused_ordering(642) 00:11:50.987 fused_ordering(643) 00:11:50.987 fused_ordering(644) 00:11:50.987 fused_ordering(645) 00:11:50.987 fused_ordering(646) 00:11:50.987 fused_ordering(647) 00:11:50.987 fused_ordering(648) 00:11:50.987 fused_ordering(649) 00:11:50.987 fused_ordering(650) 00:11:50.987 fused_ordering(651) 00:11:50.987 fused_ordering(652) 00:11:50.987 fused_ordering(653) 00:11:50.987 fused_ordering(654) 00:11:50.987 fused_ordering(655) 00:11:50.987 fused_ordering(656) 00:11:50.987 fused_ordering(657) 00:11:50.987 fused_ordering(658) 00:11:50.987 fused_ordering(659) 00:11:50.987 fused_ordering(660) 00:11:50.987 fused_ordering(661) 00:11:50.987 fused_ordering(662) 00:11:50.987 fused_ordering(663) 00:11:50.987 fused_ordering(664) 00:11:50.987 fused_ordering(665) 00:11:50.987 fused_ordering(666) 00:11:50.987 fused_ordering(667) 00:11:50.987 fused_ordering(668) 00:11:50.987 fused_ordering(669) 00:11:50.987 fused_ordering(670) 00:11:50.987 fused_ordering(671) 00:11:50.987 fused_ordering(672) 00:11:50.987 fused_ordering(673) 00:11:50.987 fused_ordering(674) 00:11:50.987 fused_ordering(675) 00:11:50.987 fused_ordering(676) 00:11:50.987 fused_ordering(677) 00:11:50.987 fused_ordering(678) 00:11:50.987 fused_ordering(679) 00:11:50.987 fused_ordering(680) 00:11:50.987 fused_ordering(681) 00:11:50.987 fused_ordering(682) 00:11:50.987 fused_ordering(683) 00:11:50.987 fused_ordering(684) 00:11:50.987 fused_ordering(685) 00:11:50.987 fused_ordering(686) 00:11:50.987 fused_ordering(687) 00:11:50.987 fused_ordering(688) 00:11:50.987 fused_ordering(689) 00:11:50.987 fused_ordering(690) 00:11:50.987 fused_ordering(691) 00:11:50.987 fused_ordering(692) 00:11:50.987 fused_ordering(693) 00:11:50.987 fused_ordering(694) 00:11:50.987 fused_ordering(695) 00:11:50.987 fused_ordering(696) 00:11:50.987 fused_ordering(697) 00:11:50.987 fused_ordering(698) 00:11:50.987 fused_ordering(699) 00:11:50.987 fused_ordering(700) 00:11:50.987 fused_ordering(701) 00:11:50.987 fused_ordering(702) 00:11:50.987 fused_ordering(703) 00:11:50.987 fused_ordering(704) 00:11:50.987 fused_ordering(705) 00:11:50.987 fused_ordering(706) 00:11:50.987 fused_ordering(707) 00:11:50.987 fused_ordering(708) 00:11:50.987 fused_ordering(709) 00:11:50.987 fused_ordering(710) 00:11:50.987 fused_ordering(711) 00:11:50.987 fused_ordering(712) 00:11:50.987 fused_ordering(713) 00:11:50.987 fused_ordering(714) 00:11:50.987 fused_ordering(715) 00:11:50.987 fused_ordering(716) 00:11:50.987 fused_ordering(717) 00:11:50.987 fused_ordering(718) 00:11:50.987 fused_ordering(719) 00:11:50.987 fused_ordering(720) 00:11:50.987 fused_ordering(721) 00:11:50.987 fused_ordering(722) 00:11:50.987 fused_ordering(723) 00:11:50.987 fused_ordering(724) 
00:11:50.987 fused_ordering(725) 00:11:50.987 fused_ordering(726) 00:11:50.987 fused_ordering(727) 00:11:50.987 fused_ordering(728) 00:11:50.987 fused_ordering(729) 00:11:50.987 fused_ordering(730) 00:11:50.987 fused_ordering(731) 00:11:50.987 fused_ordering(732) 00:11:50.987 fused_ordering(733) 00:11:50.987 fused_ordering(734) 00:11:50.987 fused_ordering(735) 00:11:50.987 fused_ordering(736) 00:11:50.987 fused_ordering(737) 00:11:50.987 fused_ordering(738) 00:11:50.987 fused_ordering(739) 00:11:50.987 fused_ordering(740) 00:11:50.987 fused_ordering(741) 00:11:50.987 fused_ordering(742) 00:11:50.987 fused_ordering(743) 00:11:50.987 fused_ordering(744) 00:11:50.987 fused_ordering(745) 00:11:50.987 fused_ordering(746) 00:11:50.987 fused_ordering(747) 00:11:50.987 fused_ordering(748) 00:11:50.987 fused_ordering(749) 00:11:50.987 fused_ordering(750) 00:11:50.987 fused_ordering(751) 00:11:50.987 fused_ordering(752) 00:11:50.987 fused_ordering(753) 00:11:50.987 fused_ordering(754) 00:11:50.987 fused_ordering(755) 00:11:50.987 fused_ordering(756) 00:11:50.987 fused_ordering(757) 00:11:50.987 fused_ordering(758) 00:11:50.987 fused_ordering(759) 00:11:50.987 fused_ordering(760) 00:11:50.987 fused_ordering(761) 00:11:50.987 fused_ordering(762) 00:11:50.987 fused_ordering(763) 00:11:50.987 fused_ordering(764) 00:11:50.987 fused_ordering(765) 00:11:50.987 fused_ordering(766) 00:11:50.987 fused_ordering(767) 00:11:50.987 fused_ordering(768) 00:11:50.987 fused_ordering(769) 00:11:50.987 fused_ordering(770) 00:11:50.987 fused_ordering(771) 00:11:50.987 fused_ordering(772) 00:11:50.987 fused_ordering(773) 00:11:50.987 fused_ordering(774) 00:11:50.987 fused_ordering(775) 00:11:50.987 fused_ordering(776) 00:11:50.987 fused_ordering(777) 00:11:50.987 fused_ordering(778) 00:11:50.987 fused_ordering(779) 00:11:50.987 fused_ordering(780) 00:11:50.987 fused_ordering(781) 00:11:50.987 fused_ordering(782) 00:11:50.987 fused_ordering(783) 00:11:50.987 fused_ordering(784) 00:11:50.987 fused_ordering(785) 00:11:50.987 fused_ordering(786) 00:11:50.987 fused_ordering(787) 00:11:50.987 fused_ordering(788) 00:11:50.987 fused_ordering(789) 00:11:50.987 fused_ordering(790) 00:11:50.987 fused_ordering(791) 00:11:50.987 fused_ordering(792) 00:11:50.987 fused_ordering(793) 00:11:50.987 fused_ordering(794) 00:11:50.987 fused_ordering(795) 00:11:50.987 fused_ordering(796) 00:11:50.987 fused_ordering(797) 00:11:50.987 fused_ordering(798) 00:11:50.987 fused_ordering(799) 00:11:50.987 fused_ordering(800) 00:11:50.987 fused_ordering(801) 00:11:50.987 fused_ordering(802) 00:11:50.987 fused_ordering(803) 00:11:50.987 fused_ordering(804) 00:11:50.987 fused_ordering(805) 00:11:50.987 fused_ordering(806) 00:11:50.987 fused_ordering(807) 00:11:50.987 fused_ordering(808) 00:11:50.987 fused_ordering(809) 00:11:50.987 fused_ordering(810) 00:11:50.987 fused_ordering(811) 00:11:50.987 fused_ordering(812) 00:11:50.987 fused_ordering(813) 00:11:50.987 fused_ordering(814) 00:11:50.987 fused_ordering(815) 00:11:50.987 fused_ordering(816) 00:11:50.987 fused_ordering(817) 00:11:50.987 fused_ordering(818) 00:11:50.987 fused_ordering(819) 00:11:50.987 fused_ordering(820) 00:11:51.559 fused_ordering(821) 00:11:51.559 fused_ordering(822) 00:11:51.559 fused_ordering(823) 00:11:51.559 fused_ordering(824) 00:11:51.559 fused_ordering(825) 00:11:51.559 fused_ordering(826) 00:11:51.559 fused_ordering(827) 00:11:51.559 fused_ordering(828) 00:11:51.559 fused_ordering(829) 00:11:51.559 fused_ordering(830) 00:11:51.559 fused_ordering(831) 00:11:51.559 
fused_ordering(832) 00:11:51.559 fused_ordering(833) 00:11:51.559 fused_ordering(834) 00:11:51.559 fused_ordering(835) 00:11:51.559 fused_ordering(836) 00:11:51.559 fused_ordering(837) 00:11:51.559 fused_ordering(838) 00:11:51.559 fused_ordering(839) 00:11:51.559 fused_ordering(840) 00:11:51.559 fused_ordering(841) 00:11:51.559 fused_ordering(842) 00:11:51.559 fused_ordering(843) 00:11:51.559 fused_ordering(844) 00:11:51.559 fused_ordering(845) 00:11:51.559 fused_ordering(846) 00:11:51.559 fused_ordering(847) 00:11:51.559 fused_ordering(848) 00:11:51.559 fused_ordering(849) 00:11:51.559 fused_ordering(850) 00:11:51.559 fused_ordering(851) 00:11:51.559 fused_ordering(852) 00:11:51.559 fused_ordering(853) 00:11:51.559 fused_ordering(854) 00:11:51.559 fused_ordering(855) 00:11:51.559 fused_ordering(856) 00:11:51.559 fused_ordering(857) 00:11:51.559 fused_ordering(858) 00:11:51.559 fused_ordering(859) 00:11:51.559 fused_ordering(860) 00:11:51.559 fused_ordering(861) 00:11:51.559 fused_ordering(862) 00:11:51.559 fused_ordering(863) 00:11:51.559 fused_ordering(864) 00:11:51.559 fused_ordering(865) 00:11:51.559 fused_ordering(866) 00:11:51.559 fused_ordering(867) 00:11:51.559 fused_ordering(868) 00:11:51.559 fused_ordering(869) 00:11:51.560 fused_ordering(870) 00:11:51.560 fused_ordering(871) 00:11:51.560 fused_ordering(872) 00:11:51.560 fused_ordering(873) 00:11:51.560 fused_ordering(874) 00:11:51.560 fused_ordering(875) 00:11:51.560 fused_ordering(876) 00:11:51.560 fused_ordering(877) 00:11:51.560 fused_ordering(878) 00:11:51.560 fused_ordering(879) 00:11:51.560 fused_ordering(880) 00:11:51.560 fused_ordering(881) 00:11:51.560 fused_ordering(882) 00:11:51.560 fused_ordering(883) 00:11:51.560 fused_ordering(884) 00:11:51.560 fused_ordering(885) 00:11:51.560 fused_ordering(886) 00:11:51.560 fused_ordering(887) 00:11:51.560 fused_ordering(888) 00:11:51.560 fused_ordering(889) 00:11:51.560 fused_ordering(890) 00:11:51.560 fused_ordering(891) 00:11:51.560 fused_ordering(892) 00:11:51.560 fused_ordering(893) 00:11:51.560 fused_ordering(894) 00:11:51.560 fused_ordering(895) 00:11:51.560 fused_ordering(896) 00:11:51.560 fused_ordering(897) 00:11:51.560 fused_ordering(898) 00:11:51.560 fused_ordering(899) 00:11:51.560 fused_ordering(900) 00:11:51.560 fused_ordering(901) 00:11:51.560 fused_ordering(902) 00:11:51.560 fused_ordering(903) 00:11:51.560 fused_ordering(904) 00:11:51.560 fused_ordering(905) 00:11:51.560 fused_ordering(906) 00:11:51.560 fused_ordering(907) 00:11:51.560 fused_ordering(908) 00:11:51.560 fused_ordering(909) 00:11:51.560 fused_ordering(910) 00:11:51.560 fused_ordering(911) 00:11:51.560 fused_ordering(912) 00:11:51.560 fused_ordering(913) 00:11:51.560 fused_ordering(914) 00:11:51.560 fused_ordering(915) 00:11:51.560 fused_ordering(916) 00:11:51.560 fused_ordering(917) 00:11:51.560 fused_ordering(918) 00:11:51.560 fused_ordering(919) 00:11:51.560 fused_ordering(920) 00:11:51.560 fused_ordering(921) 00:11:51.560 fused_ordering(922) 00:11:51.560 fused_ordering(923) 00:11:51.560 fused_ordering(924) 00:11:51.560 fused_ordering(925) 00:11:51.560 fused_ordering(926) 00:11:51.560 fused_ordering(927) 00:11:51.560 fused_ordering(928) 00:11:51.560 fused_ordering(929) 00:11:51.560 fused_ordering(930) 00:11:51.560 fused_ordering(931) 00:11:51.560 fused_ordering(932) 00:11:51.560 fused_ordering(933) 00:11:51.560 fused_ordering(934) 00:11:51.560 fused_ordering(935) 00:11:51.560 fused_ordering(936) 00:11:51.560 fused_ordering(937) 00:11:51.560 fused_ordering(938) 00:11:51.560 fused_ordering(939) 
00:11:51.560 fused_ordering(940) 00:11:51.560 fused_ordering(941) 00:11:51.560 fused_ordering(942) 00:11:51.560 fused_ordering(943) 00:11:51.560 fused_ordering(944) 00:11:51.560 fused_ordering(945) 00:11:51.560 fused_ordering(946) 00:11:51.560 fused_ordering(947) 00:11:51.560 fused_ordering(948) 00:11:51.560 fused_ordering(949) 00:11:51.560 fused_ordering(950) 00:11:51.560 fused_ordering(951) 00:11:51.560 fused_ordering(952) 00:11:51.560 fused_ordering(953) 00:11:51.560 fused_ordering(954) 00:11:51.560 fused_ordering(955) 00:11:51.560 fused_ordering(956) 00:11:51.560 fused_ordering(957) 00:11:51.560 fused_ordering(958) 00:11:51.560 fused_ordering(959) 00:11:51.560 fused_ordering(960) 00:11:51.560 fused_ordering(961) 00:11:51.560 fused_ordering(962) 00:11:51.560 fused_ordering(963) 00:11:51.560 fused_ordering(964) 00:11:51.560 fused_ordering(965) 00:11:51.560 fused_ordering(966) 00:11:51.560 fused_ordering(967) 00:11:51.560 fused_ordering(968) 00:11:51.560 fused_ordering(969) 00:11:51.560 fused_ordering(970) 00:11:51.560 fused_ordering(971) 00:11:51.560 fused_ordering(972) 00:11:51.560 fused_ordering(973) 00:11:51.560 fused_ordering(974) 00:11:51.560 fused_ordering(975) 00:11:51.560 fused_ordering(976) 00:11:51.560 fused_ordering(977) 00:11:51.560 fused_ordering(978) 00:11:51.560 fused_ordering(979) 00:11:51.560 fused_ordering(980) 00:11:51.560 fused_ordering(981) 00:11:51.560 fused_ordering(982) 00:11:51.560 fused_ordering(983) 00:11:51.560 fused_ordering(984) 00:11:51.560 fused_ordering(985) 00:11:51.560 fused_ordering(986) 00:11:51.560 fused_ordering(987) 00:11:51.560 fused_ordering(988) 00:11:51.560 fused_ordering(989) 00:11:51.560 fused_ordering(990) 00:11:51.560 fused_ordering(991) 00:11:51.560 fused_ordering(992) 00:11:51.560 fused_ordering(993) 00:11:51.560 fused_ordering(994) 00:11:51.560 fused_ordering(995) 00:11:51.560 fused_ordering(996) 00:11:51.560 fused_ordering(997) 00:11:51.560 fused_ordering(998) 00:11:51.560 fused_ordering(999) 00:11:51.560 fused_ordering(1000) 00:11:51.560 fused_ordering(1001) 00:11:51.560 fused_ordering(1002) 00:11:51.560 fused_ordering(1003) 00:11:51.560 fused_ordering(1004) 00:11:51.560 fused_ordering(1005) 00:11:51.560 fused_ordering(1006) 00:11:51.560 fused_ordering(1007) 00:11:51.560 fused_ordering(1008) 00:11:51.560 fused_ordering(1009) 00:11:51.560 fused_ordering(1010) 00:11:51.560 fused_ordering(1011) 00:11:51.560 fused_ordering(1012) 00:11:51.560 fused_ordering(1013) 00:11:51.560 fused_ordering(1014) 00:11:51.560 fused_ordering(1015) 00:11:51.560 fused_ordering(1016) 00:11:51.560 fused_ordering(1017) 00:11:51.560 fused_ordering(1018) 00:11:51.560 fused_ordering(1019) 00:11:51.560 fused_ordering(1020) 00:11:51.560 fused_ordering(1021) 00:11:51.560 fused_ordering(1022) 00:11:51.560 fused_ordering(1023) 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:11:51.560 rmmod nvme_tcp 00:11:51.560 rmmod nvme_fabrics 00:11:51.560 rmmod nvme_keyring 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3434570 ']' 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3434570 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 3434570 ']' 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 3434570 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3434570 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3434570' 00:11:51.560 killing process with pid 3434570 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 3434570 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 3434570 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:51.560 11:23:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.108 11:23:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:54.108 00:11:54.108 real 0m13.120s 00:11:54.108 user 0m7.092s 00:11:54.108 sys 0m7.065s 00:11:54.108 11:23:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:54.108 11:23:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:54.108 ************************************ 00:11:54.108 END TEST nvmf_fused_ordering 00:11:54.108 ************************************ 00:11:54.108 11:23:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:54.108 11:23:22 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:54.108 11:23:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:54.108 11:23:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:11:54.108 11:23:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:54.108 ************************************ 00:11:54.108 START TEST nvmf_delete_subsystem 00:11:54.108 ************************************ 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:54.108 * Looking for test storage... 00:11:54.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:54.108 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:54.109 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.109 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:54.109 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:54.109 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:54.109 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:54.109 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:54.109 11:23:22 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:54.109 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:54.109 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:54.109 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:54.109 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.109 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:54.109 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.109 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:54.109 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:54.109 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:54.109 11:23:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:00.702 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:00.702 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:00.702 
11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:00.702 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:00.702 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:00.702 11:23:29 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:00.702 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:00.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:00.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:12:00.963 00:12:00.963 --- 10.0.0.2 ping statistics --- 00:12:00.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.963 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:12:00.963 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:00.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:00.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.377 ms 00:12:00.963 00:12:00.963 --- 10.0.0.1 ping statistics --- 00:12:00.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.963 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:12:00.963 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:00.963 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:12:00.963 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:00.963 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:00.963 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:00.963 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:00.963 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:00.963 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:00.963 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:00.963 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:00.963 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:00.963 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:00.963 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:00.963 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3439403 00:12:00.963 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3439403 00:12:00.963 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:00.964 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 3439403 ']' 00:12:00.964 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.964 11:23:29 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:00.964 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.964 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:00.964 11:23:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:00.964 [2024-07-15 11:23:29.525076] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:12:00.964 [2024-07-15 11:23:29.525154] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.964 EAL: No free 2048 kB hugepages reported on node 1 00:12:00.964 [2024-07-15 11:23:29.596333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:01.225 [2024-07-15 11:23:29.670789] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:01.225 [2024-07-15 11:23:29.670827] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:01.225 [2024-07-15 11:23:29.670835] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:01.225 [2024-07-15 11:23:29.670841] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:01.225 [2024-07-15 11:23:29.670847] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
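For reference, the target bring-up captured above reduces to roughly the following shell sequence. This is a minimal sketch, not the autotest code itself: it assumes a local SPDK build under ./build/bin and replaces the waitforlisten helper with a simple poll for the RPC socket.

# Start the NVMe-oF target inside the test namespace with the flags shown in the log.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
# Crude stand-in for waitforlisten: wait until the default RPC socket exists.
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done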
00:12:01.225 [2024-07-15 11:23:29.670989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.225 [2024-07-15 11:23:29.670991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:01.795 [2024-07-15 11:23:30.358530] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:01.795 [2024-07-15 11:23:30.374652] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:01.795 NULL1 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:01.795 Delay0 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.795 11:23:30 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3439612 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:01.795 11:23:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:01.795 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.795 [2024-07-15 11:23:30.459245] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:04.335 11:23:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:04.335 11:23:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.335 11:23:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 starting I/O failed: -6 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 starting I/O failed: -6 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 starting I/O failed: -6 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 starting I/O failed: -6 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 starting I/O failed: -6 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 starting I/O failed: -6 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 starting I/O failed: -6 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 starting I/O failed: -6 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 
Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 starting I/O failed: -6 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 starting I/O failed: -6 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 starting I/O failed: -6 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 starting I/O failed: -6 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 [2024-07-15 11:23:32.584245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa6000 is same with the state(5) to be set 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 
00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 starting I/O failed: -6 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 starting I/O failed: -6 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 starting I/O failed: -6 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 starting I/O failed: -6 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 starting I/O failed: -6 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 starting I/O failed: -6 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 starting I/O failed: -6 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 starting I/O failed: -6 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 starting I/O failed: -6 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.335 Write completed with error (sct=0, sc=8) 00:12:04.335 starting I/O failed: -6 00:12:04.335 Read completed with error (sct=0, sc=8) 00:12:04.336 [2024-07-15 11:23:32.587715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fce9800d430 is same with the state(5) to be set 00:12:04.336 Write completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Read completed 
with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Write completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Write completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Write completed with error (sct=0, sc=8) 00:12:04.336 Write completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Write completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Write completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Write completed with error (sct=0, sc=8) 00:12:04.336 Write completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Write completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Write completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Write completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Write completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Write completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Write completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Read completed with error (sct=0, sc=8) 00:12:04.336 Write completed with error (sct=0, sc=8) 00:12:04.905 [2024-07-15 11:23:33.559836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa7ac0 is same with the state(5) to be set 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Write completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Write completed with error (sct=0, sc=8) 00:12:04.905 Write completed with error (sct=0, sc=8) 00:12:04.905 Write completed with error (sct=0, sc=8) 00:12:04.905 Write completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 
00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Write completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Write completed with error (sct=0, sc=8) 00:12:04.905 Write completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Write completed with error (sct=0, sc=8) 00:12:04.905 Write completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 [2024-07-15 11:23:33.587801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa63e0 is same with the state(5) to be set 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Write completed with error (sct=0, sc=8) 00:12:04.905 Write completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Write completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Write completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Write completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Write completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Write completed with error (sct=0, sc=8) 00:12:04.905 Write completed with error (sct=0, sc=8) 00:12:04.905 Write completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 [2024-07-15 11:23:33.588074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa67a0 is same with the state(5) to be set 00:12:04.905 Write completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Write completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Write completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Write completed with error (sct=0, sc=8) 00:12:04.905 [2024-07-15 11:23:33.590278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fce9800d740 is same with the state(5) to be set 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Write completed with error (sct=0, sc=8) 
00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Write completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 Read completed with error (sct=0, sc=8) 00:12:04.905 [2024-07-15 11:23:33.590347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fce9800cfe0 is same with the state(5) to be set 00:12:04.905 Initializing NVMe Controllers 00:12:04.905 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:04.905 Controller IO queue size 128, less than required. 00:12:04.905 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:04.905 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:04.905 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:04.905 Initialization complete. Launching workers. 00:12:04.905 ======================================================== 00:12:04.905 Latency(us) 00:12:04.905 Device Information : IOPS MiB/s Average min max 00:12:04.905 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 174.79 0.09 883489.12 243.83 1006505.16 00:12:04.905 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 156.36 0.08 927907.00 277.73 1009818.21 00:12:04.905 ======================================================== 00:12:04.905 Total : 331.15 0.16 904462.38 243.83 1009818.21 00:12:04.905 00:12:04.905 [2024-07-15 11:23:33.590871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa7ac0 (9): Bad file descriptor 00:12:04.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:12:04.905 11:23:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.905 11:23:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:12:04.905 11:23:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3439612 00:12:04.906 11:23:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3439612 00:12:05.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3439612) - No such process 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3439612 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3439612 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3439612 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:05.477 [2024-07-15 11:23:34.120285] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3440293 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3440293 00:12:05.477 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:05.477 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.737 [2024-07-15 11:23:34.190126] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
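This second pass repeats the subsystem setup above; the core sequence under test, as seen in the first pass, is: start spdk_nvme_perf in the background against the Delay0-backed subsystem, delete the subsystem while I/O is still outstanding, and poll until the perf process exits on its own. A condensed sketch of that first-pass sequence, assuming relative SPDK paths and the 10.0.0.2:4420 listener from this log:

# Background I/O against the subsystem (arguments as captured above, 5-second run).
./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2
# Delete the subsystem while requests are in flight; queued I/O completes with errors (sc=8 above).
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
# Mirror the kill -0 loop from the log: wait for perf to notice the loss of the subsystem and exit.
while kill -0 "$perf_pid" 2>/dev/null; do sleep 0.5; done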
00:12:05.998 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:05.998 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3440293 00:12:05.998 11:23:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:06.569 11:23:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:06.569 11:23:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3440293 00:12:06.569 11:23:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:07.146 11:23:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:07.146 11:23:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3440293 00:12:07.146 11:23:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:07.717 11:23:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:07.717 11:23:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3440293 00:12:07.717 11:23:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:07.978 11:23:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:07.978 11:23:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3440293 00:12:07.978 11:23:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:08.549 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:08.549 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3440293 00:12:08.549 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:08.809 Initializing NVMe Controllers 00:12:08.809 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:08.809 Controller IO queue size 128, less than required. 00:12:08.809 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:08.809 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:08.809 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:08.809 Initialization complete. Launching workers. 
00:12:08.809 ======================================================== 00:12:08.809 Latency(us) 00:12:08.809 Device Information : IOPS MiB/s Average min max 00:12:08.809 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002122.98 1000126.92 1007035.80 00:12:08.809 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002957.13 1000345.23 1009423.51 00:12:08.809 ======================================================== 00:12:08.809 Total : 256.00 0.12 1002540.06 1000126.92 1009423.51 00:12:08.809 00:12:09.069 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:09.069 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3440293 00:12:09.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3440293) - No such process 00:12:09.069 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3440293 00:12:09.069 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:09.069 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:09.069 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:09.069 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:12:09.069 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:09.069 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:12:09.069 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:09.069 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:09.069 rmmod nvme_tcp 00:12:09.069 rmmod nvme_fabrics 00:12:09.069 rmmod nvme_keyring 00:12:09.069 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:09.069 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:12:09.069 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:12:09.069 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3439403 ']' 00:12:09.069 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3439403 00:12:09.069 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 3439403 ']' 00:12:09.069 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 3439403 00:12:09.069 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:12:09.069 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:09.069 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3439403 00:12:09.363 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:09.363 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:09.363 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3439403' 00:12:09.363 killing process with pid 3439403 00:12:09.363 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 3439403 00:12:09.363 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
3439403 00:12:09.363 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:09.363 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:09.363 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:09.363 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:09.363 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:09.363 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.363 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:09.363 11:23:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.925 11:23:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:11.925 00:12:11.925 real 0m17.596s 00:12:11.925 user 0m30.535s 00:12:11.925 sys 0m6.075s 00:12:11.925 11:23:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:11.925 11:23:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:11.925 ************************************ 00:12:11.925 END TEST nvmf_delete_subsystem 00:12:11.925 ************************************ 00:12:11.925 11:23:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:11.925 11:23:40 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:11.925 11:23:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:11.925 11:23:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:11.925 11:23:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:11.925 ************************************ 00:12:11.925 START TEST nvmf_ns_masking 00:12:11.925 ************************************ 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:11.925 * Looking for test storage... 
00:12:11.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=4d9d6bce-a3b7-4634-842c-d3150fb22325 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=c32c68d5-13e2-43d1-ab4e-77ad551d21a5 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=0572bcd2-dcf6-4f47-ad07-ad11e08e54bc 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:12:11.925 11:23:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:18.517 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:18.517 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:12:18.517 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:18.517 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:18.517 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:18.517 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:18.517 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:18.517 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:12:18.517 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:18.517 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:12:18.517 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:12:18.517 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:18.518 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:18.518 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:18.518 
11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:18.518 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:18.518 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:18.518 11:23:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:18.518 11:23:47 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:18.518 11:23:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:18.518 11:23:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:18.518 11:23:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:18.518 11:23:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:18.518 11:23:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:18.779 11:23:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:18.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:18.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:12:18.779 00:12:18.779 --- 10.0.0.2 ping statistics --- 00:12:18.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.779 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:12:18.779 11:23:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:18.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:18.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:12:18.779 00:12:18.779 --- 10.0.0.1 ping statistics --- 00:12:18.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.779 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:12:18.779 11:23:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:18.779 11:23:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:12:18.779 11:23:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:18.779 11:23:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:18.779 11:23:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:18.779 11:23:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:18.779 11:23:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:18.779 11:23:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:18.779 11:23:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:18.779 11:23:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:18.779 11:23:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:18.779 11:23:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:18.779 11:23:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:18.779 11:23:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3445299 00:12:18.779 11:23:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3445299 00:12:18.779 11:23:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:18.779 11:23:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3445299 ']' 00:12:18.779 11:23:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.779 11:23:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:18.779 11:23:47 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.779 11:23:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:18.779 11:23:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:18.779 [2024-07-15 11:23:47.361040] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:12:18.779 [2024-07-15 11:23:47.361117] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.779 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.779 [2024-07-15 11:23:47.432953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.040 [2024-07-15 11:23:47.506078] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:19.040 [2024-07-15 11:23:47.506115] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:19.040 [2024-07-15 11:23:47.506128] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:19.040 [2024-07-15 11:23:47.506135] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:19.040 [2024-07-15 11:23:47.506140] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:19.040 [2024-07-15 11:23:47.506161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.611 11:23:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:19.611 11:23:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:19.611 11:23:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:19.611 11:23:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:19.611 11:23:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:19.611 11:23:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.611 11:23:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:19.611 [2024-07-15 11:23:48.293184] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:19.873 11:23:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:19.873 11:23:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:19.873 11:23:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:19.873 Malloc1 00:12:19.873 11:23:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:20.133 Malloc2 00:12:20.133 11:23:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
00:12:20.393 11:23:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:20.393 11:23:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.653 [2024-07-15 11:23:49.155196] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.653 11:23:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:20.653 11:23:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0572bcd2-dcf6-4f47-ad07-ad11e08e54bc -a 10.0.0.2 -s 4420 -i 4 00:12:20.913 11:23:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:20.913 11:23:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:20.913 11:23:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:20.913 11:23:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:20.913 11:23:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:22.824 11:23:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:22.824 11:23:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:22.824 11:23:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:22.824 11:23:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:22.824 11:23:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:22.824 11:23:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:22.824 11:23:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:22.824 11:23:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:22.824 11:23:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:22.824 11:23:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:22.824 11:23:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:22.824 11:23:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:22.824 11:23:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:22.824 [ 0]:0x1 00:12:23.084 11:23:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:23.084 11:23:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:23.084 11:23:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=851c5906f8df489496f8297dd44644b2 00:12:23.084 11:23:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 851c5906f8df489496f8297dd44644b2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:23.084 11:23:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
00:12:23.084 11:23:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:23.084 11:23:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:23.084 11:23:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:23.084 [ 0]:0x1 00:12:23.084 11:23:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:23.084 11:23:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:23.344 11:23:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=851c5906f8df489496f8297dd44644b2 00:12:23.344 11:23:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 851c5906f8df489496f8297dd44644b2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:23.344 11:23:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:23.344 11:23:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:23.344 11:23:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:23.344 [ 1]:0x2 00:12:23.344 11:23:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:23.344 11:23:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:23.344 11:23:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b235be5cf5304c3ea695abb1f87bcf89 00:12:23.344 11:23:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b235be5cf5304c3ea695abb1f87bcf89 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:23.344 11:23:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:23.344 11:23:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:23.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.344 11:23:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:23.604 11:23:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:23.604 11:23:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:23.604 11:23:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0572bcd2-dcf6-4f47-ad07-ad11e08e54bc -a 10.0.0.2 -s 4420 -i 4 00:12:23.865 11:23:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:23.865 11:23:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:23.865 11:23:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:23.865 11:23:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:12:23.865 11:23:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:12:23.865 11:23:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:25.778 11:23:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:25.778 11:23:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:25.778 11:23:54 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:25.778 11:23:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:25.778 11:23:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:25.778 11:23:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:25.778 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:25.778 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:26.039 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:26.039 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:26.039 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:26.039 11:23:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:26.039 11:23:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:26.039 11:23:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:26.039 11:23:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:26.039 11:23:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:26.039 11:23:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:26.039 11:23:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:26.039 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:26.039 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:26.039 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:26.039 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:26.039 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:26.039 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:26.039 11:23:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:26.039 11:23:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:26.039 11:23:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:26.039 11:23:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:26.039 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:26.039 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:26.039 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:26.039 [ 0]:0x2 00:12:26.039 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:26.039 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:26.039 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b235be5cf5304c3ea695abb1f87bcf89 00:12:26.039 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
b235be5cf5304c3ea695abb1f87bcf89 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:26.039 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:26.300 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:26.300 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:26.300 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:26.300 [ 0]:0x1 00:12:26.300 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:26.300 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:26.300 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=851c5906f8df489496f8297dd44644b2 00:12:26.300 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 851c5906f8df489496f8297dd44644b2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:26.300 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:26.300 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:26.300 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:26.300 [ 1]:0x2 00:12:26.300 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:26.300 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:26.300 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b235be5cf5304c3ea695abb1f87bcf89 00:12:26.300 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b235be5cf5304c3ea695abb1f87bcf89 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:26.300 11:23:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:26.561 11:23:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:26.561 11:23:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:26.561 11:23:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:26.561 11:23:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:26.561 11:23:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:26.561 11:23:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:26.561 11:23:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:26.561 11:23:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:26.561 11:23:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:26.561 11:23:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:26.561 11:23:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:26.561 11:23:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:26.561 11:23:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:12:26.561 11:23:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:26.561 11:23:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:26.561 11:23:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:26.562 11:23:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:26.562 11:23:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:26.562 11:23:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:26.562 11:23:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:26.562 11:23:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:26.562 [ 0]:0x2 00:12:26.562 11:23:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:26.562 11:23:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:26.562 11:23:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b235be5cf5304c3ea695abb1f87bcf89 00:12:26.562 11:23:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b235be5cf5304c3ea695abb1f87bcf89 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:26.562 11:23:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:26.562 11:23:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:26.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.562 11:23:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:26.823 11:23:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:26.823 11:23:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0572bcd2-dcf6-4f47-ad07-ad11e08e54bc -a 10.0.0.2 -s 4420 -i 4 00:12:27.084 11:23:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:27.084 11:23:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:27.084 11:23:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:27.084 11:23:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:27.084 11:23:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:27.084 11:23:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:28.998 11:23:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:28.998 11:23:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:28.998 11:23:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:28.998 11:23:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:28.998 11:23:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:28.998 11:23:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
00:12:28.998 11:23:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:28.998 11:23:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:29.259 11:23:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:29.259 11:23:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:29.259 11:23:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:29.259 11:23:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:29.259 11:23:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:29.259 [ 0]:0x1 00:12:29.259 11:23:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:29.259 11:23:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:29.259 11:23:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=851c5906f8df489496f8297dd44644b2 00:12:29.259 11:23:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 851c5906f8df489496f8297dd44644b2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:29.259 11:23:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:29.259 11:23:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:29.259 11:23:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:29.259 [ 1]:0x2 00:12:29.259 11:23:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:29.259 11:23:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:29.520 11:23:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b235be5cf5304c3ea695abb1f87bcf89 00:12:29.520 11:23:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b235be5cf5304c3ea695abb1f87bcf89 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:29.520 11:23:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:29.520 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:29.520 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:29.520 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:29.520 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:29.520 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:29.520 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:29.520 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:29.520 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:29.520 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:29.520 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:29.520 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:29.520 11:23:58 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:12:29.520 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:29.520 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:29.520 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:29.520 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:29.520 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:29.520 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:29.520 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:29.520 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:29.520 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:29.520 [ 0]:0x2 00:12:29.520 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:29.520 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b235be5cf5304c3ea695abb1f87bcf89 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b235be5cf5304c3ea695abb1f87bcf89 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:29.782 [2024-07-15 11:23:58.385896] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:29.782 request: 00:12:29.782 { 00:12:29.782 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:29.782 "nsid": 2, 00:12:29.782 "host": "nqn.2016-06.io.spdk:host1", 00:12:29.782 "method": "nvmf_ns_remove_host", 00:12:29.782 "req_id": 1 00:12:29.782 } 00:12:29.782 Got JSON-RPC error response 00:12:29.782 response: 00:12:29.782 { 00:12:29.782 "code": -32602, 00:12:29.782 "message": "Invalid parameters" 00:12:29.782 } 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:29.782 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:30.044 [ 0]:0x2 00:12:30.044 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:30.044 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:30.044 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b235be5cf5304c3ea695abb1f87bcf89 00:12:30.044 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
b235be5cf5304c3ea695abb1f87bcf89 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:30.044 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:30.044 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:30.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.044 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3447486 00:12:30.044 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:30.044 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:30.044 11:23:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3447486 /var/tmp/host.sock 00:12:30.044 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3447486 ']' 00:12:30.044 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:30.044 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:30.044 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:30.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:30.044 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:30.044 11:23:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:30.044 [2024-07-15 11:23:58.634761] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:12:30.044 [2024-07-15 11:23:58.634813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3447486 ] 00:12:30.044 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.044 [2024-07-15 11:23:58.709323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.305 [2024-07-15 11:23:58.773649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.878 11:23:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:30.878 11:23:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:30.878 11:23:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:30.878 11:23:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:31.138 11:23:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 4d9d6bce-a3b7-4634-842c-d3150fb22325 00:12:31.138 11:23:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:31.138 11:23:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4D9D6BCEA3B74634842CD3150FB22325 -i 00:12:31.399 11:23:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid c32c68d5-13e2-43d1-ab4e-77ad551d21a5 00:12:31.399 11:23:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:31.399 11:23:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g C32C68D513E243D1AB4E77AD551D21A5 -i 00:12:31.399 11:24:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:31.660 11:24:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:31.921 11:24:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:31.921 11:24:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:32.182 nvme0n1 00:12:32.182 11:24:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:32.182 11:24:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:12:32.754 nvme1n2 00:12:32.754 11:24:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:32.754 11:24:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:32.754 11:24:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:32.754 11:24:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:32.754 11:24:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:32.754 11:24:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:32.754 11:24:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:32.754 11:24:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:32.754 11:24:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:33.015 11:24:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 4d9d6bce-a3b7-4634-842c-d3150fb22325 == \4\d\9\d\6\b\c\e\-\a\3\b\7\-\4\6\3\4\-\8\4\2\c\-\d\3\1\5\0\f\b\2\2\3\2\5 ]] 00:12:33.015 11:24:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:33.015 11:24:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:33.015 11:24:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:33.015 11:24:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ c32c68d5-13e2-43d1-ab4e-77ad551d21a5 == \c\3\2\c\6\8\d\5\-\1\3\e\2\-\4\3\d\1\-\a\b\4\e\-\7\7\a\d\5\5\1\d\2\1\a\5 ]] 00:12:33.015 11:24:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3447486 00:12:33.015 11:24:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3447486 ']' 00:12:33.015 11:24:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3447486 00:12:33.015 11:24:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:33.015 11:24:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:33.015 11:24:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3447486 00:12:33.015 11:24:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:33.015 11:24:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:33.279 11:24:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3447486' 00:12:33.279 killing process with pid 3447486 00:12:33.279 11:24:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3447486 00:12:33.279 11:24:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3447486 00:12:33.279 11:24:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.539 11:24:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:33.539 11:24:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:12:33.539 11:24:02 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:33.539 11:24:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:33.540 11:24:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:33.540 11:24:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:33.540 11:24:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:33.540 11:24:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:33.540 rmmod nvme_tcp 00:12:33.540 rmmod nvme_fabrics 00:12:33.540 rmmod nvme_keyring 00:12:33.540 11:24:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:33.540 11:24:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:33.540 11:24:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:33.540 11:24:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3445299 ']' 00:12:33.540 11:24:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3445299 00:12:33.540 11:24:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3445299 ']' 00:12:33.540 11:24:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3445299 00:12:33.540 11:24:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:33.540 11:24:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:33.540 11:24:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3445299 00:12:33.540 11:24:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:33.540 11:24:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:33.540 11:24:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3445299' 00:12:33.540 killing process with pid 3445299 00:12:33.540 11:24:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3445299 00:12:33.540 11:24:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3445299 00:12:33.800 11:24:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:33.800 11:24:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:33.800 11:24:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:33.800 11:24:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:33.800 11:24:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:33.800 11:24:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.800 11:24:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:33.800 11:24:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.346 11:24:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:36.346 00:12:36.346 real 0m24.367s 00:12:36.346 user 0m24.585s 00:12:36.346 sys 0m7.275s 00:12:36.346 11:24:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:36.346 11:24:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:36.346 ************************************ 00:12:36.346 END TEST nvmf_ns_masking 00:12:36.346 ************************************ 00:12:36.346 11:24:04 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:12:36.346 11:24:04 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:36.346 11:24:04 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:36.346 11:24:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:36.346 11:24:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:36.346 11:24:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:36.346 ************************************ 00:12:36.346 START TEST nvmf_nvme_cli 00:12:36.346 ************************************ 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:36.346 * Looking for test storage... 00:12:36.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:36.346 11:24:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:42.938 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:42.938 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:42.938 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.938 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:42.939 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.939 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:42.939 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:42.939 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.939 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:42.939 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:42.939 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.939 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:42.939 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:42.939 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:42.939 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:42.939 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:42.939 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:42.939 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:42.939 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:42.939 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:42.939 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:42.939 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:42.939 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:42.939 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:42.939 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:42.939 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:42.939 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:42.939 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:42.939 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:42.939 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:43.200 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:43.200 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:43.200 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:43.200 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:43.200 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:43.200 11:24:11 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:43.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:43.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:12:43.200 00:12:43.200 --- 10.0.0.2 ping statistics --- 00:12:43.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.200 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:12:43.200 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:43.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:43.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.442 ms 00:12:43.200 00:12:43.200 --- 10.0.0.1 ping statistics --- 00:12:43.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.200 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:12:43.200 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:43.200 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:43.200 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:43.200 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:43.200 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:43.200 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:43.200 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:43.200 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:43.200 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:43.200 11:24:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:43.200 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:43.200 11:24:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:43.200 11:24:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:43.200 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3453070 00:12:43.200 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3453070 00:12:43.200 11:24:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:43.200 11:24:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 3453070 ']' 00:12:43.200 11:24:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.200 11:24:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:43.200 11:24:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.200 11:24:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:43.200 11:24:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:43.462 [2024-07-15 11:24:11.919110] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
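The trace above builds the point-to-point NVMe/TCP test topology: one port of the dual-port E810 (cvl_0_0) is moved into the network namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2, the peer port (cvl_0_1) stays in the default namespace as 10.0.0.1, TCP port 4420 is opened in the firewall, and both directions are verified with ping before nvmf_tgt is launched inside the namespace. A minimal standalone sketch of the same setup, assuming the same cvl_0_0/cvl_0_1 names this machine reports (substitute your own NIC ports):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                      # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator

The target is then started with 'ip netns exec cvl_0_0_ns_spdk ...' so it listens on 10.0.0.2 while nvme-cli on the host connects over cvl_0_1, which is exactly what the nvmfappstart step below does.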
00:12:43.462 [2024-07-15 11:24:11.919183] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.462 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.462 [2024-07-15 11:24:11.991357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:43.462 [2024-07-15 11:24:12.066959] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:43.462 [2024-07-15 11:24:12.067000] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:43.462 [2024-07-15 11:24:12.067008] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:43.462 [2024-07-15 11:24:12.067014] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:43.462 [2024-07-15 11:24:12.067020] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:43.462 [2024-07-15 11:24:12.067169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.462 [2024-07-15 11:24:12.067269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:43.462 [2024-07-15 11:24:12.067428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.462 [2024-07-15 11:24:12.067429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:44.032 11:24:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:44.032 11:24:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:12:44.032 11:24:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:44.032 11:24:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:44.032 11:24:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:44.292 [2024-07-15 11:24:12.740769] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:44.292 Malloc0 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:44.292 Malloc1 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.292 11:24:12 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:44.292 [2024-07-15 11:24:12.830543] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.292 11:24:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:12:44.552 00:12:44.552 Discovery Log Number of Records 2, Generation counter 2 00:12:44.552 =====Discovery Log Entry 0====== 00:12:44.552 trtype: tcp 00:12:44.552 adrfam: ipv4 00:12:44.552 subtype: current discovery subsystem 00:12:44.552 treq: not required 00:12:44.552 portid: 0 00:12:44.553 trsvcid: 4420 00:12:44.553 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:44.553 traddr: 10.0.0.2 00:12:44.553 eflags: explicit discovery connections, duplicate discovery information 00:12:44.553 sectype: none 00:12:44.553 =====Discovery Log Entry 1====== 00:12:44.553 trtype: tcp 00:12:44.553 adrfam: ipv4 00:12:44.553 subtype: nvme subsystem 00:12:44.553 treq: not required 00:12:44.553 portid: 0 00:12:44.553 trsvcid: 4420 00:12:44.553 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:44.553 traddr: 10.0.0.2 00:12:44.553 eflags: none 00:12:44.553 sectype: none 00:12:44.553 11:24:13 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:44.553 11:24:13 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:44.553 11:24:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:44.553 11:24:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:44.553 11:24:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:44.553 11:24:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:44.553 11:24:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:44.553 11:24:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:44.553 11:24:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:44.553 11:24:13 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:44.553 11:24:13 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:45.941 11:24:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:45.941 11:24:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:12:45.941 11:24:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.941 11:24:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:45.941 11:24:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:45.941 11:24:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:48.483 11:24:16 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:48.483 /dev/nvme0n1 ]] 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:48.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:48.483 rmmod nvme_tcp 00:12:48.483 rmmod nvme_fabrics 00:12:48.483 rmmod nvme_keyring 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3453070 ']' 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3453070 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 3453070 ']' 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 3453070 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3453070 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3453070' 00:12:48.483 killing process with pid 3453070 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 3453070 00:12:48.483 11:24:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 3453070 00:12:48.483 11:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:48.483 11:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:48.483 11:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:48.483 11:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:48.483 11:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:48.483 11:24:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.483 11:24:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:48.483 11:24:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.023 11:24:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:51.023 00:12:51.023 real 0m14.610s 00:12:51.023 user 0m22.037s 00:12:51.023 sys 0m5.936s 00:12:51.023 11:24:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:51.023 11:24:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:51.023 ************************************ 00:12:51.023 END TEST nvmf_nvme_cli 00:12:51.023 ************************************ 00:12:51.023 11:24:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:51.023 11:24:19 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:51.023 11:24:19 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:51.023 11:24:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:51.023 11:24:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:51.024 11:24:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:51.024 ************************************ 00:12:51.024 START TEST nvmf_vfio_user 00:12:51.024 ************************************ 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:51.024 * Looking for test storage... 00:12:51.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:51.024 
11:24:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3454551 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3454551' 00:12:51.024 Process pid: 3454551 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3454551 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3454551 ']' 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:51.024 11:24:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:51.024 [2024-07-15 11:24:19.426358] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:12:51.024 [2024-07-15 11:24:19.426427] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.024 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.024 [2024-07-15 11:24:19.491329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:51.024 [2024-07-15 11:24:19.556999] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.024 [2024-07-15 11:24:19.557038] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.024 [2024-07-15 11:24:19.557045] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.024 [2024-07-15 11:24:19.557052] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.024 [2024-07-15 11:24:19.557057] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
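With the target process up, the next stretch of the log drives the vfio-user setup over rpc.py. Consolidated into a sketch (directory layout, bdev and subsystem names below match what this run creates: two malloc-backed subsystems, one per vfio-user device):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        dir=/var/run/vfio-user/domain/vfio-user$i/$i
        mkdir -p "$dir"
        $rpc bdev_malloc_create 64 512 -b Malloc$i
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a "$dir" -s 0
    done

For the VFIOUSER transport the listener address is a directory rather than an IP: the target places the vfio-user control socket (cntrl) there, and the initiator side later points trtype:VFIOUSER traddr: at the same path, as the spdk_nvme_identify invocation further down shows.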
00:12:51.024 [2024-07-15 11:24:19.557161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.024 [2024-07-15 11:24:19.557384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.024 [2024-07-15 11:24:19.557385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.024 [2024-07-15 11:24:19.557236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.595 11:24:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:51.595 11:24:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:51.596 11:24:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:52.537 11:24:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:52.797 11:24:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:52.797 11:24:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:52.797 11:24:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:52.798 11:24:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:52.798 11:24:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:53.058 Malloc1 00:12:53.058 11:24:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:53.058 11:24:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:53.318 11:24:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:53.578 11:24:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:53.578 11:24:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:53.578 11:24:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:53.578 Malloc2 00:12:53.578 11:24:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:53.838 11:24:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:54.099 11:24:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:54.099 11:24:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:54.099 11:24:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:54.099 11:24:22 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:54.099 11:24:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:54.099 11:24:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:54.099 11:24:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:54.099 [2024-07-15 11:24:22.783173] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:12:54.099 [2024-07-15 11:24:22.783215] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3455263 ] 00:12:54.099 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.361 [2024-07-15 11:24:22.814762] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:54.362 [2024-07-15 11:24:22.820135] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:54.362 [2024-07-15 11:24:22.820155] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3e85b14000 00:12:54.362 [2024-07-15 11:24:22.821137] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:54.362 [2024-07-15 11:24:22.822138] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:54.362 [2024-07-15 11:24:22.823142] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:54.362 [2024-07-15 11:24:22.824146] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:54.362 [2024-07-15 11:24:22.825151] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:54.362 [2024-07-15 11:24:22.826160] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:54.362 [2024-07-15 11:24:22.827155] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:54.362 [2024-07-15 11:24:22.828166] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:54.362 [2024-07-15 11:24:22.829177] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:54.362 [2024-07-15 11:24:22.829191] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3e85b09000 00:12:54.362 [2024-07-15 11:24:22.830520] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:54.362 [2024-07-15 11:24:22.851453] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:54.362 [2024-07-15 11:24:22.851482] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:54.362 [2024-07-15 11:24:22.854314] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:54.362 [2024-07-15 11:24:22.854366] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:54.362 [2024-07-15 11:24:22.854455] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:54.362 [2024-07-15 11:24:22.854473] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:54.362 [2024-07-15 11:24:22.854478] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:54.362 [2024-07-15 11:24:22.855316] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:54.362 [2024-07-15 11:24:22.855326] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:54.362 [2024-07-15 11:24:22.855333] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:54.362 [2024-07-15 11:24:22.856324] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:54.362 [2024-07-15 11:24:22.856334] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:54.362 [2024-07-15 11:24:22.856342] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:54.362 [2024-07-15 11:24:22.857333] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:54.362 [2024-07-15 11:24:22.857342] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:54.362 [2024-07-15 11:24:22.858340] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:54.362 [2024-07-15 11:24:22.858351] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:54.362 [2024-07-15 11:24:22.858356] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:54.362 [2024-07-15 11:24:22.858363] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:54.362 [2024-07-15 11:24:22.858471] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:54.362 [2024-07-15 11:24:22.858476] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:54.362 [2024-07-15 11:24:22.858481] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:54.362 [2024-07-15 11:24:22.859338] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:54.362 [2024-07-15 11:24:22.860353] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:54.362 [2024-07-15 11:24:22.861355] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:54.362 [2024-07-15 11:24:22.862352] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:54.362 [2024-07-15 11:24:22.862428] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:54.362 [2024-07-15 11:24:22.863366] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:54.362 [2024-07-15 11:24:22.863375] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:54.362 [2024-07-15 11:24:22.863380] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:54.362 [2024-07-15 11:24:22.863401] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:54.362 [2024-07-15 11:24:22.863408] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:54.362 [2024-07-15 11:24:22.863424] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:54.362 [2024-07-15 11:24:22.863430] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:54.362 [2024-07-15 11:24:22.863444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:54.362 [2024-07-15 11:24:22.863478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:54.362 [2024-07-15 11:24:22.863488] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:54.362 [2024-07-15 11:24:22.863494] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:54.362 [2024-07-15 11:24:22.863499] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:54.362 [2024-07-15 11:24:22.863503] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:54.362 [2024-07-15 11:24:22.863508] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:54.362 [2024-07-15 11:24:22.863513] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:54.362 [2024-07-15 11:24:22.863517] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:54.362 [2024-07-15 11:24:22.863525] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:54.362 [2024-07-15 11:24:22.863534] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:54.362 [2024-07-15 11:24:22.863542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:54.362 [2024-07-15 11:24:22.863556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.362 [2024-07-15 11:24:22.863565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.362 [2024-07-15 11:24:22.863575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.362 [2024-07-15 11:24:22.863583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.362 [2024-07-15 11:24:22.863588] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:54.362 [2024-07-15 11:24:22.863597] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:54.362 [2024-07-15 11:24:22.863606] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:54.362 [2024-07-15 11:24:22.863614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:54.362 [2024-07-15 11:24:22.863619] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:54.362 [2024-07-15 11:24:22.863624] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:54.362 [2024-07-15 11:24:22.863631] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:54.362 [2024-07-15 11:24:22.863637] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:54.362 [2024-07-15 11:24:22.863646] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:54.362 [2024-07-15 11:24:22.863653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:54.362 [2024-07-15 11:24:22.863713] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:54.362 [2024-07-15 11:24:22.863720] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:54.362 [2024-07-15 11:24:22.863728] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:54.362 [2024-07-15 11:24:22.863732] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:54.362 [2024-07-15 11:24:22.863738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:54.362 [2024-07-15 11:24:22.863752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:54.362 [2024-07-15 11:24:22.863761] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:54.362 [2024-07-15 11:24:22.863774] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:54.362 [2024-07-15 11:24:22.863781] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:54.362 [2024-07-15 11:24:22.863788] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:54.362 [2024-07-15 11:24:22.863793] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:54.363 [2024-07-15 11:24:22.863798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:54.363 [2024-07-15 11:24:22.863814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:54.363 [2024-07-15 11:24:22.863827] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:54.363 [2024-07-15 11:24:22.863836] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:54.363 [2024-07-15 11:24:22.863843] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:54.363 [2024-07-15 11:24:22.863847] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:54.363 [2024-07-15 11:24:22.863853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:54.363 [2024-07-15 11:24:22.863863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:54.363 [2024-07-15 11:24:22.863870] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:54.363 [2024-07-15 11:24:22.863877] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
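The debug trace above is spdk_nvme_identify bringing the vfio-user controller up: it maps the emulated BARs, walks the standard CC.EN/CSTS.RDY enable handshake, then issues the IDENTIFY, SET FEATURES and GET LOG PAGE admin commands whose decoded results are printed below. The same step can be repeated by hand against the live socket with the command the harness used (drop the -L debug flags for a quieter report; adjust the workspace path to your own checkout):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -g -L nvme -L nvme_vfio -L vfio_pci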
00:12:54.363 [2024-07-15 11:24:22.863884] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:54.363 [2024-07-15 11:24:22.863890] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:12:54.363 [2024-07-15 11:24:22.863895] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:54.363 [2024-07-15 11:24:22.863900] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:54.363 [2024-07-15 11:24:22.863905] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:54.363 [2024-07-15 11:24:22.863910] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:54.363 [2024-07-15 11:24:22.863915] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:54.363 [2024-07-15 11:24:22.863932] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:54.363 [2024-07-15 11:24:22.863942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:54.363 [2024-07-15 11:24:22.863953] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:54.363 [2024-07-15 11:24:22.863965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:54.363 [2024-07-15 11:24:22.863976] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:54.363 [2024-07-15 11:24:22.863988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:54.363 [2024-07-15 11:24:22.863999] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:54.363 [2024-07-15 11:24:22.864008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:54.363 [2024-07-15 11:24:22.864020] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:54.363 [2024-07-15 11:24:22.864025] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:54.363 [2024-07-15 11:24:22.864029] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:54.363 [2024-07-15 11:24:22.864032] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:54.363 [2024-07-15 11:24:22.864040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:54.363 [2024-07-15 11:24:22.864048] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:54.363 
[2024-07-15 11:24:22.864052] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:54.363 [2024-07-15 11:24:22.864058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:54.363 [2024-07-15 11:24:22.864065] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:54.363 [2024-07-15 11:24:22.864069] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:54.363 [2024-07-15 11:24:22.864075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:54.363 [2024-07-15 11:24:22.864083] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:54.363 [2024-07-15 11:24:22.864087] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:54.363 [2024-07-15 11:24:22.864093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:54.363 [2024-07-15 11:24:22.864100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:54.363 [2024-07-15 11:24:22.864111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:54.363 [2024-07-15 11:24:22.864127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:54.363 [2024-07-15 11:24:22.864134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:54.363 ===================================================== 00:12:54.363 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:54.363 ===================================================== 00:12:54.363 Controller Capabilities/Features 00:12:54.363 ================================ 00:12:54.363 Vendor ID: 4e58 00:12:54.363 Subsystem Vendor ID: 4e58 00:12:54.363 Serial Number: SPDK1 00:12:54.363 Model Number: SPDK bdev Controller 00:12:54.363 Firmware Version: 24.09 00:12:54.363 Recommended Arb Burst: 6 00:12:54.363 IEEE OUI Identifier: 8d 6b 50 00:12:54.363 Multi-path I/O 00:12:54.363 May have multiple subsystem ports: Yes 00:12:54.363 May have multiple controllers: Yes 00:12:54.363 Associated with SR-IOV VF: No 00:12:54.363 Max Data Transfer Size: 131072 00:12:54.363 Max Number of Namespaces: 32 00:12:54.363 Max Number of I/O Queues: 127 00:12:54.363 NVMe Specification Version (VS): 1.3 00:12:54.363 NVMe Specification Version (Identify): 1.3 00:12:54.363 Maximum Queue Entries: 256 00:12:54.363 Contiguous Queues Required: Yes 00:12:54.363 Arbitration Mechanisms Supported 00:12:54.363 Weighted Round Robin: Not Supported 00:12:54.363 Vendor Specific: Not Supported 00:12:54.363 Reset Timeout: 15000 ms 00:12:54.363 Doorbell Stride: 4 bytes 00:12:54.363 NVM Subsystem Reset: Not Supported 00:12:54.363 Command Sets Supported 00:12:54.363 NVM Command Set: Supported 00:12:54.363 Boot Partition: Not Supported 00:12:54.363 Memory Page Size Minimum: 4096 bytes 00:12:54.363 Memory Page Size Maximum: 4096 bytes 00:12:54.363 Persistent Memory Region: Not Supported 
00:12:54.363 Optional Asynchronous Events Supported 00:12:54.363 Namespace Attribute Notices: Supported 00:12:54.363 Firmware Activation Notices: Not Supported 00:12:54.363 ANA Change Notices: Not Supported 00:12:54.363 PLE Aggregate Log Change Notices: Not Supported 00:12:54.363 LBA Status Info Alert Notices: Not Supported 00:12:54.363 EGE Aggregate Log Change Notices: Not Supported 00:12:54.363 Normal NVM Subsystem Shutdown event: Not Supported 00:12:54.363 Zone Descriptor Change Notices: Not Supported 00:12:54.363 Discovery Log Change Notices: Not Supported 00:12:54.363 Controller Attributes 00:12:54.363 128-bit Host Identifier: Supported 00:12:54.363 Non-Operational Permissive Mode: Not Supported 00:12:54.363 NVM Sets: Not Supported 00:12:54.363 Read Recovery Levels: Not Supported 00:12:54.363 Endurance Groups: Not Supported 00:12:54.363 Predictable Latency Mode: Not Supported 00:12:54.363 Traffic Based Keep ALive: Not Supported 00:12:54.363 Namespace Granularity: Not Supported 00:12:54.363 SQ Associations: Not Supported 00:12:54.363 UUID List: Not Supported 00:12:54.363 Multi-Domain Subsystem: Not Supported 00:12:54.363 Fixed Capacity Management: Not Supported 00:12:54.363 Variable Capacity Management: Not Supported 00:12:54.363 Delete Endurance Group: Not Supported 00:12:54.363 Delete NVM Set: Not Supported 00:12:54.363 Extended LBA Formats Supported: Not Supported 00:12:54.363 Flexible Data Placement Supported: Not Supported 00:12:54.363 00:12:54.363 Controller Memory Buffer Support 00:12:54.363 ================================ 00:12:54.363 Supported: No 00:12:54.363 00:12:54.363 Persistent Memory Region Support 00:12:54.363 ================================ 00:12:54.363 Supported: No 00:12:54.363 00:12:54.363 Admin Command Set Attributes 00:12:54.363 ============================ 00:12:54.363 Security Send/Receive: Not Supported 00:12:54.363 Format NVM: Not Supported 00:12:54.363 Firmware Activate/Download: Not Supported 00:12:54.363 Namespace Management: Not Supported 00:12:54.363 Device Self-Test: Not Supported 00:12:54.363 Directives: Not Supported 00:12:54.363 NVMe-MI: Not Supported 00:12:54.363 Virtualization Management: Not Supported 00:12:54.363 Doorbell Buffer Config: Not Supported 00:12:54.363 Get LBA Status Capability: Not Supported 00:12:54.363 Command & Feature Lockdown Capability: Not Supported 00:12:54.363 Abort Command Limit: 4 00:12:54.363 Async Event Request Limit: 4 00:12:54.363 Number of Firmware Slots: N/A 00:12:54.363 Firmware Slot 1 Read-Only: N/A 00:12:54.363 Firmware Activation Without Reset: N/A 00:12:54.364 Multiple Update Detection Support: N/A 00:12:54.364 Firmware Update Granularity: No Information Provided 00:12:54.364 Per-Namespace SMART Log: No 00:12:54.364 Asymmetric Namespace Access Log Page: Not Supported 00:12:54.364 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:54.364 Command Effects Log Page: Supported 00:12:54.364 Get Log Page Extended Data: Supported 00:12:54.364 Telemetry Log Pages: Not Supported 00:12:54.364 Persistent Event Log Pages: Not Supported 00:12:54.364 Supported Log Pages Log Page: May Support 00:12:54.364 Commands Supported & Effects Log Page: Not Supported 00:12:54.364 Feature Identifiers & Effects Log Page:May Support 00:12:54.364 NVMe-MI Commands & Effects Log Page: May Support 00:12:54.364 Data Area 4 for Telemetry Log: Not Supported 00:12:54.364 Error Log Page Entries Supported: 128 00:12:54.364 Keep Alive: Supported 00:12:54.364 Keep Alive Granularity: 10000 ms 00:12:54.364 00:12:54.364 NVM Command Set Attributes 
00:12:54.364 ========================== 00:12:54.364 Submission Queue Entry Size 00:12:54.364 Max: 64 00:12:54.364 Min: 64 00:12:54.364 Completion Queue Entry Size 00:12:54.364 Max: 16 00:12:54.364 Min: 16 00:12:54.364 Number of Namespaces: 32 00:12:54.364 Compare Command: Supported 00:12:54.364 Write Uncorrectable Command: Not Supported 00:12:54.364 Dataset Management Command: Supported 00:12:54.364 Write Zeroes Command: Supported 00:12:54.364 Set Features Save Field: Not Supported 00:12:54.364 Reservations: Not Supported 00:12:54.364 Timestamp: Not Supported 00:12:54.364 Copy: Supported 00:12:54.364 Volatile Write Cache: Present 00:12:54.364 Atomic Write Unit (Normal): 1 00:12:54.364 Atomic Write Unit (PFail): 1 00:12:54.364 Atomic Compare & Write Unit: 1 00:12:54.364 Fused Compare & Write: Supported 00:12:54.364 Scatter-Gather List 00:12:54.364 SGL Command Set: Supported (Dword aligned) 00:12:54.364 SGL Keyed: Not Supported 00:12:54.364 SGL Bit Bucket Descriptor: Not Supported 00:12:54.364 SGL Metadata Pointer: Not Supported 00:12:54.364 Oversized SGL: Not Supported 00:12:54.364 SGL Metadata Address: Not Supported 00:12:54.364 SGL Offset: Not Supported 00:12:54.364 Transport SGL Data Block: Not Supported 00:12:54.364 Replay Protected Memory Block: Not Supported 00:12:54.364 00:12:54.364 Firmware Slot Information 00:12:54.364 ========================= 00:12:54.364 Active slot: 1 00:12:54.364 Slot 1 Firmware Revision: 24.09 00:12:54.364 00:12:54.364 00:12:54.364 Commands Supported and Effects 00:12:54.364 ============================== 00:12:54.364 Admin Commands 00:12:54.364 -------------- 00:12:54.364 Get Log Page (02h): Supported 00:12:54.364 Identify (06h): Supported 00:12:54.364 Abort (08h): Supported 00:12:54.364 Set Features (09h): Supported 00:12:54.364 Get Features (0Ah): Supported 00:12:54.364 Asynchronous Event Request (0Ch): Supported 00:12:54.364 Keep Alive (18h): Supported 00:12:54.364 I/O Commands 00:12:54.364 ------------ 00:12:54.364 Flush (00h): Supported LBA-Change 00:12:54.364 Write (01h): Supported LBA-Change 00:12:54.364 Read (02h): Supported 00:12:54.364 Compare (05h): Supported 00:12:54.364 Write Zeroes (08h): Supported LBA-Change 00:12:54.364 Dataset Management (09h): Supported LBA-Change 00:12:54.364 Copy (19h): Supported LBA-Change 00:12:54.364 00:12:54.364 Error Log 00:12:54.364 ========= 00:12:54.364 00:12:54.364 Arbitration 00:12:54.364 =========== 00:12:54.364 Arbitration Burst: 1 00:12:54.364 00:12:54.364 Power Management 00:12:54.364 ================ 00:12:54.364 Number of Power States: 1 00:12:54.364 Current Power State: Power State #0 00:12:54.364 Power State #0: 00:12:54.364 Max Power: 0.00 W 00:12:54.364 Non-Operational State: Operational 00:12:54.364 Entry Latency: Not Reported 00:12:54.364 Exit Latency: Not Reported 00:12:54.364 Relative Read Throughput: 0 00:12:54.364 Relative Read Latency: 0 00:12:54.364 Relative Write Throughput: 0 00:12:54.364 Relative Write Latency: 0 00:12:54.364 Idle Power: Not Reported 00:12:54.364 Active Power: Not Reported 00:12:54.364 Non-Operational Permissive Mode: Not Supported 00:12:54.364 00:12:54.364 Health Information 00:12:54.364 ================== 00:12:54.364 Critical Warnings: 00:12:54.364 Available Spare Space: OK 00:12:54.364 Temperature: OK 00:12:54.364 Device Reliability: OK 00:12:54.364 Read Only: No 00:12:54.364 Volatile Memory Backup: OK 00:12:54.364 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:54.364 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:54.364 Available Spare: 0% 00:12:54.364 
Available Spare Threshold: 0% 00:12:54.364 [2024-07-15 11:24:22.864233] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:54.364 [2024-07-15 11:24:22.864242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:54.364 [2024-07-15 11:24:22.864270] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:54.364 [2024-07-15 11:24:22.864279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.364 [2024-07-15 11:24:22.864286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.364 [2024-07-15 11:24:22.864292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.364 [2024-07-15 11:24:22.864298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.364 [2024-07-15 11:24:22.864371] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:54.364 [2024-07-15 11:24:22.864381] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:54.364 [2024-07-15 11:24:22.865374] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:54.364 [2024-07-15 11:24:22.865416] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:54.364 [2024-07-15 11:24:22.865422] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:54.364 [2024-07-15 11:24:22.866384] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:54.364 [2024-07-15 11:24:22.866399] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:54.364 [2024-07-15 11:24:22.866464] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:54.364 [2024-07-15 11:24:22.871131] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:54.364 Life Percentage Used: 0% 00:12:54.364 Data Units Read: 0 00:12:54.364 Data Units Written: 0 00:12:54.364 Host Read Commands: 0 00:12:54.364 Host Write Commands: 0 00:12:54.364 Controller Busy Time: 0 minutes 00:12:54.364 Power Cycles: 0 00:12:54.364 Power On Hours: 0 hours 00:12:54.364 Unsafe Shutdowns: 0 00:12:54.364 Unrecoverable Media Errors: 0 00:12:54.364 Lifetime Error Log Entries: 0 00:12:54.364 Warning Temperature Time: 0 minutes 00:12:54.364 Critical Temperature Time: 0 minutes 00:12:54.364 00:12:54.364 Number of Queues 00:12:54.364 ================ 00:12:54.364 Number of I/O Submission Queues: 127 00:12:54.364 Number of I/O Completion Queues: 127 00:12:54.364 00:12:54.364 Active Namespaces 00:12:54.364 ================= 00:12:54.364 Namespace ID:1 00:12:54.364 Error Recovery Timeout: Unlimited 00:12:54.364 Command 
Set Identifier: NVM (00h) 00:12:54.364 Deallocate: Supported 00:12:54.364 Deallocated/Unwritten Error: Not Supported 00:12:54.364 Deallocated Read Value: Unknown 00:12:54.364 Deallocate in Write Zeroes: Not Supported 00:12:54.364 Deallocated Guard Field: 0xFFFF 00:12:54.364 Flush: Supported 00:12:54.364 Reservation: Supported 00:12:54.364 Namespace Sharing Capabilities: Multiple Controllers 00:12:54.364 Size (in LBAs): 131072 (0GiB) 00:12:54.364 Capacity (in LBAs): 131072 (0GiB) 00:12:54.364 Utilization (in LBAs): 131072 (0GiB) 00:12:54.364 NGUID: A9494A85FCED4CFCB40560656C88D8E7 00:12:54.364 UUID: a9494a85-fced-4cfc-b405-60656c88d8e7 00:12:54.364 Thin Provisioning: Not Supported 00:12:54.364 Per-NS Atomic Units: Yes 00:12:54.364 Atomic Boundary Size (Normal): 0 00:12:54.364 Atomic Boundary Size (PFail): 0 00:12:54.364 Atomic Boundary Offset: 0 00:12:54.364 Maximum Single Source Range Length: 65535 00:12:54.364 Maximum Copy Length: 65535 00:12:54.364 Maximum Source Range Count: 1 00:12:54.364 NGUID/EUI64 Never Reused: No 00:12:54.364 Namespace Write Protected: No 00:12:54.364 Number of LBA Formats: 1 00:12:54.364 Current LBA Format: LBA Format #00 00:12:54.364 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:54.364 00:12:54.365 11:24:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:54.365 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.365 [2024-07-15 11:24:23.053751] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:59.691 Initializing NVMe Controllers 00:12:59.691 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:59.691 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:59.691 Initialization complete. Launching workers. 00:12:59.691 ======================================================== 00:12:59.691 Latency(us) 00:12:59.691 Device Information : IOPS MiB/s Average min max 00:12:59.691 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40051.93 156.45 3195.72 829.34 6823.67 00:12:59.691 ======================================================== 00:12:59.691 Total : 40051.93 156.45 3195.72 829.34 6823.67 00:12:59.691 00:12:59.691 [2024-07-15 11:24:28.072031] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:59.691 11:24:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:59.691 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.691 [2024-07-15 11:24:28.254926] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:04.979 Initializing NVMe Controllers 00:13:04.979 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:04.980 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:04.980 Initialization complete. Launching workers. 
00:13:04.980 ======================================================== 00:13:04.980 Latency(us) 00:13:04.980 Device Information : IOPS MiB/s Average min max 00:13:04.980 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7980.52 4987.98 10974.32 00:13:04.980 ======================================================== 00:13:04.980 Total : 16051.20 62.70 7980.52 4987.98 10974.32 00:13:04.980 00:13:04.980 [2024-07-15 11:24:33.289001] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:04.980 11:24:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:04.980 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.980 [2024-07-15 11:24:33.472842] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:10.267 [2024-07-15 11:24:38.533296] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:10.267 Initializing NVMe Controllers 00:13:10.267 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:10.267 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:10.267 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:10.267 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:10.267 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:10.267 Initialization complete. Launching workers. 00:13:10.267 Starting thread on core 2 00:13:10.267 Starting thread on core 3 00:13:10.267 Starting thread on core 1 00:13:10.267 11:24:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:10.267 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.267 [2024-07-15 11:24:38.789516] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:13.567 [2024-07-15 11:24:41.854143] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:13.567 Initializing NVMe Controllers 00:13:13.567 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:13.567 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:13.567 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:13.567 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:13.567 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:13.567 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:13.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:13.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:13.567 Initialization complete. Launching workers. 
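A quick aside on reading the two spdk_nvme_perf summaries above: the reported MiB/s column is consistent with the IOPS figure multiplied by the I/O size passed with -o (4096 bytes in both runs) and divided by 2^20. The snippet below is only an illustrative cross-check in Python, not part of the SPDK test suite; the helper name and the hard-coded figures copied from the read and write summaries are for demonstration.

def iops_to_mibps(iops: float, io_size_bytes: int = 4096) -> float:
    # MiB/s implied by a fixed-size I/O rate (assumes the -o 4096 size above).
    return iops * io_size_bytes / (1024 * 1024)

# Figures taken from the 4 KiB read and write runs reported above.
assert round(iops_to_mibps(40051.93), 2) == 156.45
assert round(iops_to_mibps(16051.20), 2) == 62.70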
00:13:13.567 Starting thread on core 1 with urgent priority queue 00:13:13.567 Starting thread on core 2 with urgent priority queue 00:13:13.567 Starting thread on core 3 with urgent priority queue 00:13:13.567 Starting thread on core 0 with urgent priority queue 00:13:13.567 SPDK bdev Controller (SPDK1 ) core 0: 9807.67 IO/s 10.20 secs/100000 ios 00:13:13.567 SPDK bdev Controller (SPDK1 ) core 1: 10436.00 IO/s 9.58 secs/100000 ios 00:13:13.567 SPDK bdev Controller (SPDK1 ) core 2: 10093.00 IO/s 9.91 secs/100000 ios 00:13:13.567 SPDK bdev Controller (SPDK1 ) core 3: 13015.00 IO/s 7.68 secs/100000 ios 00:13:13.567 ======================================================== 00:13:13.567 00:13:13.567 11:24:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:13.567 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.567 [2024-07-15 11:24:42.115597] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:13.567 Initializing NVMe Controllers 00:13:13.567 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:13.567 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:13.567 Namespace ID: 1 size: 0GB 00:13:13.567 Initialization complete. 00:13:13.567 INFO: using host memory buffer for IO 00:13:13.567 Hello world! 00:13:13.567 [2024-07-15 11:24:42.149779] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:13.567 11:24:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:13.567 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.837 [2024-07-15 11:24:42.414533] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:14.802 Initializing NVMe Controllers 00:13:14.802 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:14.802 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:14.802 Initialization complete. Launching workers. 
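The arbitration summary above can be sanity-checked the same way: each core's secs/100000 ios value matches 100000 divided by its IO/s figure. A minimal Python sketch, again purely illustrative rather than part of the test, with the numbers copied from the core 0 and core 3 rows above:

def secs_per_n_ios(io_per_sec: float, n_ios: int = 100000) -> float:
    # Seconds to complete n_ios at a constant per-core IO/s rate.
    return n_ios / io_per_sec

assert round(secs_per_n_ios(9807.67), 2) == 10.20   # core 0 row above
assert round(secs_per_n_ios(13015.00), 2) == 7.68   # core 3 row above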
00:13:14.802 submit (in ns) avg, min, max = 7751.5, 3903.3, 4000432.5 00:13:14.802 complete (in ns) avg, min, max = 19322.2, 2384.2, 4041692.5 00:13:14.802 00:13:14.802 Submit histogram 00:13:14.802 ================ 00:13:14.802 Range in us Cumulative Count 00:13:14.802 3.893 - 3.920: 0.7500% ( 146) 00:13:14.802 3.920 - 3.947: 6.0773% ( 1037) 00:13:14.802 3.947 - 3.973: 15.7660% ( 1886) 00:13:14.802 3.973 - 4.000: 27.1961% ( 2225) 00:13:14.802 4.000 - 4.027: 37.7530% ( 2055) 00:13:14.802 4.027 - 4.053: 49.3938% ( 2266) 00:13:14.802 4.053 - 4.080: 65.9252% ( 3218) 00:13:14.802 4.080 - 4.107: 80.2476% ( 2788) 00:13:14.802 4.107 - 4.133: 90.8353% ( 2061) 00:13:14.802 4.133 - 4.160: 96.2242% ( 1049) 00:13:14.802 4.160 - 4.187: 98.4383% ( 431) 00:13:14.802 4.187 - 4.213: 99.1883% ( 146) 00:13:14.802 4.213 - 4.240: 99.4349% ( 48) 00:13:14.802 4.240 - 4.267: 99.4966% ( 12) 00:13:14.802 4.267 - 4.293: 99.5222% ( 5) 00:13:14.802 4.293 - 4.320: 99.5274% ( 1) 00:13:14.802 4.373 - 4.400: 99.5325% ( 1) 00:13:14.802 4.427 - 4.453: 99.5377% ( 1) 00:13:14.802 4.507 - 4.533: 99.5428% ( 1) 00:13:14.802 4.560 - 4.587: 99.5479% ( 1) 00:13:14.802 4.613 - 4.640: 99.5531% ( 1) 00:13:14.802 4.853 - 4.880: 99.5582% ( 1) 00:13:14.802 4.960 - 4.987: 99.5633% ( 1) 00:13:14.802 5.200 - 5.227: 99.5685% ( 1) 00:13:14.802 5.280 - 5.307: 99.5736% ( 1) 00:13:14.802 5.360 - 5.387: 99.5839% ( 2) 00:13:14.802 5.440 - 5.467: 99.5890% ( 1) 00:13:14.802 5.467 - 5.493: 99.5942% ( 1) 00:13:14.802 5.547 - 5.573: 99.5993% ( 1) 00:13:14.802 5.680 - 5.707: 99.6044% ( 1) 00:13:14.802 5.733 - 5.760: 99.6096% ( 1) 00:13:14.802 6.000 - 6.027: 99.6147% ( 1) 00:13:14.802 6.027 - 6.053: 99.6198% ( 1) 00:13:14.802 6.080 - 6.107: 99.6301% ( 2) 00:13:14.802 6.107 - 6.133: 99.6404% ( 2) 00:13:14.802 6.133 - 6.160: 99.6455% ( 1) 00:13:14.802 6.160 - 6.187: 99.6507% ( 1) 00:13:14.802 6.187 - 6.213: 99.6609% ( 2) 00:13:14.802 6.213 - 6.240: 99.6661% ( 1) 00:13:14.802 6.240 - 6.267: 99.6712% ( 1) 00:13:14.802 6.267 - 6.293: 99.6764% ( 1) 00:13:14.802 6.373 - 6.400: 99.6918% ( 3) 00:13:14.802 6.453 - 6.480: 99.7020% ( 2) 00:13:14.802 6.480 - 6.507: 99.7072% ( 1) 00:13:14.802 6.507 - 6.533: 99.7123% ( 1) 00:13:14.802 6.560 - 6.587: 99.7175% ( 1) 00:13:14.802 6.587 - 6.613: 99.7226% ( 1) 00:13:14.802 6.720 - 6.747: 99.7329% ( 2) 00:13:14.802 6.773 - 6.800: 99.7431% ( 2) 00:13:14.802 6.827 - 6.880: 99.7483% ( 1) 00:13:14.802 6.880 - 6.933: 99.7534% ( 1) 00:13:14.802 6.933 - 6.987: 99.7586% ( 1) 00:13:14.802 6.987 - 7.040: 99.7740% ( 3) 00:13:14.802 7.147 - 7.200: 99.7791% ( 1) 00:13:14.802 7.200 - 7.253: 99.7894% ( 2) 00:13:14.802 7.253 - 7.307: 99.7997% ( 2) 00:13:14.802 7.360 - 7.413: 99.8048% ( 1) 00:13:14.802 7.413 - 7.467: 99.8099% ( 1) 00:13:14.802 7.467 - 7.520: 99.8202% ( 2) 00:13:14.802 7.520 - 7.573: 99.8305% ( 2) 00:13:14.802 7.627 - 7.680: 99.8356% ( 1) 00:13:14.802 7.680 - 7.733: 99.8459% ( 2) 00:13:14.802 7.733 - 7.787: 99.8562% ( 2) 00:13:14.802 7.893 - 7.947: 99.8613% ( 1) 00:13:14.802 8.053 - 8.107: 99.8716% ( 2) 00:13:14.802 8.107 - 8.160: 99.8767% ( 1) 00:13:14.802 8.213 - 8.267: 99.8870% ( 2) 00:13:14.802 8.427 - 8.480: 99.8973% ( 2) 00:13:14.802 12.373 - 12.427: 99.9024% ( 1) 00:13:14.802 14.720 - 14.827: 99.9075% ( 1) 00:13:14.802 3986.773 - 4014.080: 100.0000% ( 18) 00:13:14.802 00:13:14.802 Complete histogram 00:13:14.802 ================== 00:13:14.802 Range in us Cumulative Count 00:13:14.802 2.373 - 2.387: 0.0103% ( 2) 00:13:14.802 2.387 - 2.400: 0.7500% ( 144) 00:13:14.802 2.400 - 2.413: 0.8476% ( 19) 00:13:14.802 2.413 - 
2.427: 1.0069% ( 31) 00:13:14.802 2.427 - 2.440: 1.0480% ( 8) 00:13:14.802 2.440 - 2.453: 34.9738% ( 6604) 00:13:14.802 2.453 - [2024-07-15 11:24:43.437925] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:15.062 2.467: 56.7194% ( 4233) 00:13:15.062 2.467 - 2.480: 68.3397% ( 2262) 00:13:15.062 2.480 - 2.493: 77.5557% ( 1794) 00:13:15.062 2.493 - 2.507: 81.3367% ( 736) 00:13:15.062 2.507 - 2.520: 83.8282% ( 485) 00:13:15.062 2.520 - 2.533: 90.0288% ( 1207) 00:13:15.062 2.533 - 2.547: 94.3645% ( 844) 00:13:15.062 2.547 - 2.560: 96.7636% ( 467) 00:13:15.062 2.560 - 2.573: 98.4794% ( 334) 00:13:15.062 2.573 - 2.587: 99.1986% ( 140) 00:13:15.062 2.587 - 2.600: 99.3013% ( 20) 00:13:15.062 2.600 - 2.613: 99.3270% ( 5) 00:13:15.062 2.613 - 2.627: 99.3322% ( 1) 00:13:15.062 4.507 - 4.533: 99.3424% ( 2) 00:13:15.062 4.613 - 4.640: 99.3476% ( 1) 00:13:15.062 4.640 - 4.667: 99.3527% ( 1) 00:13:15.062 4.693 - 4.720: 99.3579% ( 1) 00:13:15.062 4.747 - 4.773: 99.3681% ( 2) 00:13:15.062 4.773 - 4.800: 99.3784% ( 2) 00:13:15.062 4.800 - 4.827: 99.3835% ( 1) 00:13:15.062 4.827 - 4.853: 99.3990% ( 3) 00:13:15.062 4.853 - 4.880: 99.4041% ( 1) 00:13:15.062 4.960 - 4.987: 99.4092% ( 1) 00:13:15.062 4.987 - 5.013: 99.4144% ( 1) 00:13:15.062 5.147 - 5.173: 99.4195% ( 1) 00:13:15.062 5.200 - 5.227: 99.4246% ( 1) 00:13:15.062 5.253 - 5.280: 99.4298% ( 1) 00:13:15.062 5.307 - 5.333: 99.4349% ( 1) 00:13:15.062 5.333 - 5.360: 99.4452% ( 2) 00:13:15.062 5.467 - 5.493: 99.4503% ( 1) 00:13:15.062 5.493 - 5.520: 99.4606% ( 2) 00:13:15.062 5.520 - 5.547: 99.4657% ( 1) 00:13:15.062 5.573 - 5.600: 99.4709% ( 1) 00:13:15.062 5.653 - 5.680: 99.4760% ( 1) 00:13:15.062 5.840 - 5.867: 99.4811% ( 1) 00:13:15.062 5.867 - 5.893: 99.4863% ( 1) 00:13:15.062 6.187 - 6.213: 99.4914% ( 1) 00:13:15.062 6.240 - 6.267: 99.4966% ( 1) 00:13:15.062 6.267 - 6.293: 99.5068% ( 2) 00:13:15.062 6.507 - 6.533: 99.5120% ( 1) 00:13:15.062 6.613 - 6.640: 99.5171% ( 1) 00:13:15.062 6.640 - 6.667: 99.5274% ( 2) 00:13:15.062 6.693 - 6.720: 99.5325% ( 1) 00:13:15.062 6.720 - 6.747: 99.5377% ( 1) 00:13:15.062 7.093 - 7.147: 99.5428% ( 1) 00:13:15.062 7.200 - 7.253: 99.5479% ( 1) 00:13:15.062 8.800 - 8.853: 99.5531% ( 1) 00:13:15.062 9.013 - 9.067: 99.5582% ( 1) 00:13:15.062 11.787 - 11.840: 99.5633% ( 1) 00:13:15.062 14.400 - 14.507: 99.5685% ( 1) 00:13:15.062 44.160 - 44.373: 99.5736% ( 1) 00:13:15.062 151.893 - 152.747: 99.5788% ( 1) 00:13:15.062 3986.773 - 4014.080: 99.9949% ( 81) 00:13:15.062 4041.387 - 4068.693: 100.0000% ( 1) 00:13:15.062 00:13:15.062 11:24:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:15.062 11:24:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:15.062 11:24:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:15.062 11:24:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:15.062 11:24:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:15.062 [ 00:13:15.062 { 00:13:15.062 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:15.062 "subtype": "Discovery", 00:13:15.062 "listen_addresses": [], 00:13:15.062 "allow_any_host": true, 00:13:15.062 "hosts": [] 00:13:15.062 }, 00:13:15.062 { 00:13:15.062 "nqn": 
"nqn.2019-07.io.spdk:cnode1", 00:13:15.062 "subtype": "NVMe", 00:13:15.062 "listen_addresses": [ 00:13:15.062 { 00:13:15.062 "trtype": "VFIOUSER", 00:13:15.062 "adrfam": "IPv4", 00:13:15.062 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:15.062 "trsvcid": "0" 00:13:15.062 } 00:13:15.062 ], 00:13:15.062 "allow_any_host": true, 00:13:15.062 "hosts": [], 00:13:15.062 "serial_number": "SPDK1", 00:13:15.062 "model_number": "SPDK bdev Controller", 00:13:15.062 "max_namespaces": 32, 00:13:15.062 "min_cntlid": 1, 00:13:15.062 "max_cntlid": 65519, 00:13:15.062 "namespaces": [ 00:13:15.062 { 00:13:15.062 "nsid": 1, 00:13:15.062 "bdev_name": "Malloc1", 00:13:15.062 "name": "Malloc1", 00:13:15.062 "nguid": "A9494A85FCED4CFCB40560656C88D8E7", 00:13:15.062 "uuid": "a9494a85-fced-4cfc-b405-60656c88d8e7" 00:13:15.062 } 00:13:15.062 ] 00:13:15.062 }, 00:13:15.062 { 00:13:15.062 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:15.062 "subtype": "NVMe", 00:13:15.062 "listen_addresses": [ 00:13:15.062 { 00:13:15.062 "trtype": "VFIOUSER", 00:13:15.062 "adrfam": "IPv4", 00:13:15.062 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:15.062 "trsvcid": "0" 00:13:15.062 } 00:13:15.062 ], 00:13:15.062 "allow_any_host": true, 00:13:15.062 "hosts": [], 00:13:15.062 "serial_number": "SPDK2", 00:13:15.062 "model_number": "SPDK bdev Controller", 00:13:15.062 "max_namespaces": 32, 00:13:15.062 "min_cntlid": 1, 00:13:15.062 "max_cntlid": 65519, 00:13:15.062 "namespaces": [ 00:13:15.062 { 00:13:15.062 "nsid": 1, 00:13:15.062 "bdev_name": "Malloc2", 00:13:15.062 "name": "Malloc2", 00:13:15.062 "nguid": "3DB7193A6950469D91F27F758E4899DB", 00:13:15.062 "uuid": "3db7193a-6950-469d-91f2-7f758e4899db" 00:13:15.062 } 00:13:15.062 ] 00:13:15.062 } 00:13:15.062 ] 00:13:15.062 11:24:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:15.062 11:24:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3459446 00:13:15.062 11:24:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:15.062 11:24:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:15.062 11:24:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:15.062 11:24:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:15.062 11:24:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:15.062 11:24:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:15.062 11:24:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:15.062 11:24:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:15.062 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.322 Malloc3 00:13:15.322 [2024-07-15 11:24:43.829545] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:15.322 11:24:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:15.322 [2024-07-15 11:24:44.006719] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:15.582 11:24:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:15.582 Asynchronous Event Request test 00:13:15.582 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:15.582 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:15.582 Registering asynchronous event callbacks... 00:13:15.582 Starting namespace attribute notice tests for all controllers... 00:13:15.582 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:15.582 aer_cb - Changed Namespace 00:13:15.582 Cleaning up... 00:13:15.582 [ 00:13:15.582 { 00:13:15.582 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:15.582 "subtype": "Discovery", 00:13:15.582 "listen_addresses": [], 00:13:15.582 "allow_any_host": true, 00:13:15.582 "hosts": [] 00:13:15.582 }, 00:13:15.582 { 00:13:15.582 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:15.582 "subtype": "NVMe", 00:13:15.582 "listen_addresses": [ 00:13:15.582 { 00:13:15.582 "trtype": "VFIOUSER", 00:13:15.582 "adrfam": "IPv4", 00:13:15.582 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:15.582 "trsvcid": "0" 00:13:15.582 } 00:13:15.582 ], 00:13:15.582 "allow_any_host": true, 00:13:15.582 "hosts": [], 00:13:15.582 "serial_number": "SPDK1", 00:13:15.582 "model_number": "SPDK bdev Controller", 00:13:15.582 "max_namespaces": 32, 00:13:15.582 "min_cntlid": 1, 00:13:15.582 "max_cntlid": 65519, 00:13:15.582 "namespaces": [ 00:13:15.582 { 00:13:15.582 "nsid": 1, 00:13:15.582 "bdev_name": "Malloc1", 00:13:15.582 "name": "Malloc1", 00:13:15.582 "nguid": "A9494A85FCED4CFCB40560656C88D8E7", 00:13:15.582 "uuid": "a9494a85-fced-4cfc-b405-60656c88d8e7" 00:13:15.582 }, 00:13:15.582 { 00:13:15.582 "nsid": 2, 00:13:15.582 "bdev_name": "Malloc3", 00:13:15.582 "name": "Malloc3", 00:13:15.582 "nguid": "DAB3296BB61942288C19FBE7C49990C0", 00:13:15.582 "uuid": "dab3296b-b619-4228-8c19-fbe7c49990c0" 00:13:15.582 } 00:13:15.582 ] 00:13:15.582 }, 00:13:15.582 { 00:13:15.582 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:15.582 "subtype": "NVMe", 00:13:15.582 "listen_addresses": [ 00:13:15.582 { 00:13:15.582 "trtype": "VFIOUSER", 00:13:15.582 "adrfam": "IPv4", 00:13:15.582 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:15.582 "trsvcid": "0" 00:13:15.582 } 00:13:15.582 ], 00:13:15.582 "allow_any_host": true, 00:13:15.582 "hosts": [], 00:13:15.582 "serial_number": "SPDK2", 00:13:15.582 "model_number": "SPDK bdev Controller", 00:13:15.582 
"max_namespaces": 32, 00:13:15.582 "min_cntlid": 1, 00:13:15.582 "max_cntlid": 65519, 00:13:15.582 "namespaces": [ 00:13:15.582 { 00:13:15.582 "nsid": 1, 00:13:15.582 "bdev_name": "Malloc2", 00:13:15.582 "name": "Malloc2", 00:13:15.582 "nguid": "3DB7193A6950469D91F27F758E4899DB", 00:13:15.582 "uuid": "3db7193a-6950-469d-91f2-7f758e4899db" 00:13:15.582 } 00:13:15.582 ] 00:13:15.582 } 00:13:15.582 ] 00:13:15.582 11:24:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3459446 00:13:15.582 11:24:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:15.582 11:24:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:15.582 11:24:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:15.582 11:24:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:15.582 [2024-07-15 11:24:44.224390] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:13:15.582 [2024-07-15 11:24:44.224432] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3459623 ] 00:13:15.582 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.582 [2024-07-15 11:24:44.255641] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:15.582 [2024-07-15 11:24:44.260866] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:15.582 [2024-07-15 11:24:44.260885] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f058070a000 00:13:15.582 [2024-07-15 11:24:44.261858] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:15.582 [2024-07-15 11:24:44.262862] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:15.582 [2024-07-15 11:24:44.263867] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:15.582 [2024-07-15 11:24:44.264875] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:15.582 [2024-07-15 11:24:44.265882] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:15.582 [2024-07-15 11:24:44.266886] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:15.582 [2024-07-15 11:24:44.267896] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:15.582 [2024-07-15 11:24:44.268909] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:15.582 [2024-07-15 11:24:44.269916] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:15.582 [2024-07-15 11:24:44.269926] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f05806ff000 00:13:15.582 [2024-07-15 11:24:44.271256] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:15.845 [2024-07-15 11:24:44.292290] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:15.845 [2024-07-15 11:24:44.292314] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:15.845 [2024-07-15 11:24:44.294374] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:15.845 [2024-07-15 11:24:44.294417] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:15.845 [2024-07-15 11:24:44.294495] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:13:15.845 [2024-07-15 11:24:44.294511] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:15.845 [2024-07-15 11:24:44.294516] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:15.845 [2024-07-15 11:24:44.295381] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:15.845 [2024-07-15 11:24:44.295390] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:15.845 [2024-07-15 11:24:44.295398] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:15.845 [2024-07-15 11:24:44.296384] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:15.845 [2024-07-15 11:24:44.296394] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:15.845 [2024-07-15 11:24:44.296401] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:15.845 [2024-07-15 11:24:44.297392] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:15.845 [2024-07-15 11:24:44.297401] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:15.845 [2024-07-15 11:24:44.298399] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:15.845 [2024-07-15 11:24:44.298408] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:15.845 [2024-07-15 11:24:44.298413] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:15.845 [2024-07-15 11:24:44.298419] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:15.845 [2024-07-15 11:24:44.298525] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:15.845 [2024-07-15 11:24:44.298530] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:15.845 [2024-07-15 11:24:44.298534] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:15.845 [2024-07-15 11:24:44.299406] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:15.845 [2024-07-15 11:24:44.300420] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:15.845 [2024-07-15 11:24:44.301423] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:15.845 [2024-07-15 11:24:44.302430] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:15.845 [2024-07-15 11:24:44.302469] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:15.845 [2024-07-15 11:24:44.303436] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:15.845 [2024-07-15 11:24:44.303444] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:15.845 [2024-07-15 11:24:44.303449] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:15.845 [2024-07-15 11:24:44.303471] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:15.845 [2024-07-15 11:24:44.303482] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:15.845 [2024-07-15 11:24:44.303494] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:15.845 [2024-07-15 11:24:44.303499] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:15.845 [2024-07-15 11:24:44.303512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:15.845 [2024-07-15 11:24:44.310131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:15.845 [2024-07-15 11:24:44.310142] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:15.845 [2024-07-15 11:24:44.310149] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:15.845 [2024-07-15 11:24:44.310154] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:15.845 [2024-07-15 11:24:44.310158] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:15.845 [2024-07-15 11:24:44.310163] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:15.845 [2024-07-15 11:24:44.310170] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:15.845 [2024-07-15 11:24:44.310174] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:15.845 [2024-07-15 11:24:44.310182] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:15.845 [2024-07-15 11:24:44.310192] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:15.845 [2024-07-15 11:24:44.318132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:15.845 [2024-07-15 11:24:44.318147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:15.845 [2024-07-15 11:24:44.318155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:15.845 [2024-07-15 11:24:44.318164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:15.845 [2024-07-15 11:24:44.318172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:15.845 [2024-07-15 11:24:44.318176] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:15.845 [2024-07-15 11:24:44.318184] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:15.845 [2024-07-15 11:24:44.318193] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:15.845 [2024-07-15 11:24:44.326127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:15.845 [2024-07-15 11:24:44.326135] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:15.845 [2024-07-15 11:24:44.326140] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:15.845 [2024-07-15 11:24:44.326147] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:15.845 [2024-07-15 11:24:44.326152] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:13:15.845 [2024-07-15 11:24:44.326161] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:15.845 [2024-07-15 11:24:44.334137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:15.845 [2024-07-15 11:24:44.334199] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:15.845 [2024-07-15 11:24:44.334207] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:13:15.845 [2024-07-15 11:24:44.334215] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:15.845 [2024-07-15 11:24:44.334219] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:15.845 [2024-07-15 11:24:44.334226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:15.845 [2024-07-15 11:24:44.342129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:15.845 [2024-07-15 11:24:44.342139] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:15.845 [2024-07-15 11:24:44.342148] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:15.845 [2024-07-15 11:24:44.342155] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:15.845 [2024-07-15 11:24:44.342162] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:15.845 [2024-07-15 11:24:44.342167] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:15.845 [2024-07-15 11:24:44.342173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:15.845 [2024-07-15 11:24:44.350128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:15.845 [2024-07-15 11:24:44.350142] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:15.845 [2024-07-15 11:24:44.350149] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:15.845 [2024-07-15 11:24:44.350157] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:15.845 [2024-07-15 11:24:44.350161] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:15.845 [2024-07-15 11:24:44.350167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:15.845 [2024-07-15 11:24:44.358129] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:15.845 [2024-07-15 11:24:44.358139] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:15.845 [2024-07-15 11:24:44.358145] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:15.845 [2024-07-15 11:24:44.358153] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:15.845 [2024-07-15 11:24:44.358159] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:13:15.845 [2024-07-15 11:24:44.358164] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:15.845 [2024-07-15 11:24:44.358169] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:15.845 [2024-07-15 11:24:44.358174] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:15.845 [2024-07-15 11:24:44.358178] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:15.845 [2024-07-15 11:24:44.358183] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:15.845 [2024-07-15 11:24:44.358199] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:15.845 [2024-07-15 11:24:44.366128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:15.845 [2024-07-15 11:24:44.366142] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:15.845 [2024-07-15 11:24:44.374128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:15.845 [2024-07-15 11:24:44.374141] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:15.845 [2024-07-15 11:24:44.382128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:15.845 [2024-07-15 11:24:44.382141] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:15.845 [2024-07-15 11:24:44.390129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:15.845 [2024-07-15 11:24:44.390145] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:15.845 [2024-07-15 11:24:44.390150] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:15.845 [2024-07-15 11:24:44.390154] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:13:15.845 [2024-07-15 11:24:44.390157] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:15.845 [2024-07-15 11:24:44.390163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:15.845 [2024-07-15 11:24:44.390171] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:15.845 [2024-07-15 11:24:44.390175] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:15.845 [2024-07-15 11:24:44.390181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:15.845 [2024-07-15 11:24:44.390188] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:15.845 [2024-07-15 11:24:44.390192] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:15.845 [2024-07-15 11:24:44.390198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:15.845 [2024-07-15 11:24:44.390206] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:15.845 [2024-07-15 11:24:44.390210] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:15.845 [2024-07-15 11:24:44.390216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:15.845 [2024-07-15 11:24:44.398129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:15.845 [2024-07-15 11:24:44.398144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:15.845 [2024-07-15 11:24:44.398154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:15.845 [2024-07-15 11:24:44.398161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:15.845 ===================================================== 00:13:15.845 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:15.845 ===================================================== 00:13:15.845 Controller Capabilities/Features 00:13:15.845 ================================ 00:13:15.845 Vendor ID: 4e58 00:13:15.845 Subsystem Vendor ID: 4e58 00:13:15.845 Serial Number: SPDK2 00:13:15.845 Model Number: SPDK bdev Controller 00:13:15.845 Firmware Version: 24.09 00:13:15.845 Recommended Arb Burst: 6 00:13:15.845 IEEE OUI Identifier: 8d 6b 50 00:13:15.845 Multi-path I/O 00:13:15.845 May have multiple subsystem ports: Yes 00:13:15.845 May have multiple controllers: Yes 00:13:15.845 Associated with SR-IOV VF: No 00:13:15.845 Max Data Transfer Size: 131072 00:13:15.845 Max Number of Namespaces: 32 00:13:15.845 Max Number of I/O Queues: 127 00:13:15.845 NVMe Specification Version (VS): 1.3 00:13:15.845 NVMe Specification Version (Identify): 1.3 00:13:15.845 Maximum Queue Entries: 256 00:13:15.845 Contiguous Queues Required: Yes 00:13:15.845 Arbitration Mechanisms 
Supported 00:13:15.845 Weighted Round Robin: Not Supported 00:13:15.845 Vendor Specific: Not Supported 00:13:15.845 Reset Timeout: 15000 ms 00:13:15.845 Doorbell Stride: 4 bytes 00:13:15.845 NVM Subsystem Reset: Not Supported 00:13:15.845 Command Sets Supported 00:13:15.845 NVM Command Set: Supported 00:13:15.845 Boot Partition: Not Supported 00:13:15.845 Memory Page Size Minimum: 4096 bytes 00:13:15.845 Memory Page Size Maximum: 4096 bytes 00:13:15.845 Persistent Memory Region: Not Supported 00:13:15.845 Optional Asynchronous Events Supported 00:13:15.845 Namespace Attribute Notices: Supported 00:13:15.845 Firmware Activation Notices: Not Supported 00:13:15.845 ANA Change Notices: Not Supported 00:13:15.845 PLE Aggregate Log Change Notices: Not Supported 00:13:15.845 LBA Status Info Alert Notices: Not Supported 00:13:15.845 EGE Aggregate Log Change Notices: Not Supported 00:13:15.845 Normal NVM Subsystem Shutdown event: Not Supported 00:13:15.845 Zone Descriptor Change Notices: Not Supported 00:13:15.845 Discovery Log Change Notices: Not Supported 00:13:15.845 Controller Attributes 00:13:15.845 128-bit Host Identifier: Supported 00:13:15.845 Non-Operational Permissive Mode: Not Supported 00:13:15.845 NVM Sets: Not Supported 00:13:15.845 Read Recovery Levels: Not Supported 00:13:15.845 Endurance Groups: Not Supported 00:13:15.845 Predictable Latency Mode: Not Supported 00:13:15.845 Traffic Based Keep ALive: Not Supported 00:13:15.845 Namespace Granularity: Not Supported 00:13:15.845 SQ Associations: Not Supported 00:13:15.845 UUID List: Not Supported 00:13:15.845 Multi-Domain Subsystem: Not Supported 00:13:15.845 Fixed Capacity Management: Not Supported 00:13:15.845 Variable Capacity Management: Not Supported 00:13:15.845 Delete Endurance Group: Not Supported 00:13:15.845 Delete NVM Set: Not Supported 00:13:15.845 Extended LBA Formats Supported: Not Supported 00:13:15.845 Flexible Data Placement Supported: Not Supported 00:13:15.845 00:13:15.845 Controller Memory Buffer Support 00:13:15.845 ================================ 00:13:15.845 Supported: No 00:13:15.845 00:13:15.845 Persistent Memory Region Support 00:13:15.845 ================================ 00:13:15.845 Supported: No 00:13:15.845 00:13:15.845 Admin Command Set Attributes 00:13:15.845 ============================ 00:13:15.845 Security Send/Receive: Not Supported 00:13:15.846 Format NVM: Not Supported 00:13:15.846 Firmware Activate/Download: Not Supported 00:13:15.846 Namespace Management: Not Supported 00:13:15.846 Device Self-Test: Not Supported 00:13:15.846 Directives: Not Supported 00:13:15.846 NVMe-MI: Not Supported 00:13:15.846 Virtualization Management: Not Supported 00:13:15.846 Doorbell Buffer Config: Not Supported 00:13:15.846 Get LBA Status Capability: Not Supported 00:13:15.846 Command & Feature Lockdown Capability: Not Supported 00:13:15.846 Abort Command Limit: 4 00:13:15.846 Async Event Request Limit: 4 00:13:15.846 Number of Firmware Slots: N/A 00:13:15.846 Firmware Slot 1 Read-Only: N/A 00:13:15.846 Firmware Activation Without Reset: N/A 00:13:15.846 Multiple Update Detection Support: N/A 00:13:15.846 Firmware Update Granularity: No Information Provided 00:13:15.846 Per-Namespace SMART Log: No 00:13:15.846 Asymmetric Namespace Access Log Page: Not Supported 00:13:15.846 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:15.846 Command Effects Log Page: Supported 00:13:15.846 Get Log Page Extended Data: Supported 00:13:15.846 Telemetry Log Pages: Not Supported 00:13:15.846 Persistent Event Log Pages: Not Supported 
00:13:15.846 Supported Log Pages Log Page: May Support 00:13:15.846 Commands Supported & Effects Log Page: Not Supported 00:13:15.846 Feature Identifiers & Effects Log Page:May Support 00:13:15.846 NVMe-MI Commands & Effects Log Page: May Support 00:13:15.846 Data Area 4 for Telemetry Log: Not Supported 00:13:15.846 Error Log Page Entries Supported: 128 00:13:15.846 Keep Alive: Supported 00:13:15.846 Keep Alive Granularity: 10000 ms 00:13:15.846 00:13:15.846 NVM Command Set Attributes 00:13:15.846 ========================== 00:13:15.846 Submission Queue Entry Size 00:13:15.846 Max: 64 00:13:15.846 Min: 64 00:13:15.846 Completion Queue Entry Size 00:13:15.846 Max: 16 00:13:15.846 Min: 16 00:13:15.846 Number of Namespaces: 32 00:13:15.846 Compare Command: Supported 00:13:15.846 Write Uncorrectable Command: Not Supported 00:13:15.846 Dataset Management Command: Supported 00:13:15.846 Write Zeroes Command: Supported 00:13:15.846 Set Features Save Field: Not Supported 00:13:15.846 Reservations: Not Supported 00:13:15.846 Timestamp: Not Supported 00:13:15.846 Copy: Supported 00:13:15.846 Volatile Write Cache: Present 00:13:15.846 Atomic Write Unit (Normal): 1 00:13:15.846 Atomic Write Unit (PFail): 1 00:13:15.846 Atomic Compare & Write Unit: 1 00:13:15.846 Fused Compare & Write: Supported 00:13:15.846 Scatter-Gather List 00:13:15.846 SGL Command Set: Supported (Dword aligned) 00:13:15.846 SGL Keyed: Not Supported 00:13:15.846 SGL Bit Bucket Descriptor: Not Supported 00:13:15.846 SGL Metadata Pointer: Not Supported 00:13:15.846 Oversized SGL: Not Supported 00:13:15.846 SGL Metadata Address: Not Supported 00:13:15.846 SGL Offset: Not Supported 00:13:15.846 Transport SGL Data Block: Not Supported 00:13:15.846 Replay Protected Memory Block: Not Supported 00:13:15.846 00:13:15.846 Firmware Slot Information 00:13:15.846 ========================= 00:13:15.846 Active slot: 1 00:13:15.846 Slot 1 Firmware Revision: 24.09 00:13:15.846 00:13:15.846 00:13:15.846 Commands Supported and Effects 00:13:15.846 ============================== 00:13:15.846 Admin Commands 00:13:15.846 -------------- 00:13:15.846 Get Log Page (02h): Supported 00:13:15.846 Identify (06h): Supported 00:13:15.846 Abort (08h): Supported 00:13:15.846 Set Features (09h): Supported 00:13:15.846 Get Features (0Ah): Supported 00:13:15.846 Asynchronous Event Request (0Ch): Supported 00:13:15.846 Keep Alive (18h): Supported 00:13:15.846 I/O Commands 00:13:15.846 ------------ 00:13:15.846 Flush (00h): Supported LBA-Change 00:13:15.846 Write (01h): Supported LBA-Change 00:13:15.846 Read (02h): Supported 00:13:15.846 Compare (05h): Supported 00:13:15.846 Write Zeroes (08h): Supported LBA-Change 00:13:15.846 Dataset Management (09h): Supported LBA-Change 00:13:15.846 Copy (19h): Supported LBA-Change 00:13:15.846 00:13:15.846 Error Log 00:13:15.846 ========= 00:13:15.846 00:13:15.846 Arbitration 00:13:15.846 =========== 00:13:15.846 Arbitration Burst: 1 00:13:15.846 00:13:15.846 Power Management 00:13:15.846 ================ 00:13:15.846 Number of Power States: 1 00:13:15.846 Current Power State: Power State #0 00:13:15.846 Power State #0: 00:13:15.846 Max Power: 0.00 W 00:13:15.846 Non-Operational State: Operational 00:13:15.846 Entry Latency: Not Reported 00:13:15.846 Exit Latency: Not Reported 00:13:15.846 Relative Read Throughput: 0 00:13:15.846 Relative Read Latency: 0 00:13:15.846 Relative Write Throughput: 0 00:13:15.846 Relative Write Latency: 0 00:13:15.846 Idle Power: Not Reported 00:13:15.846 Active Power: Not Reported 00:13:15.846 
Non-Operational Permissive Mode: Not Supported 00:13:15.846 00:13:15.846 Health Information 00:13:15.846 ================== 00:13:15.846 Critical Warnings: 00:13:15.846 Available Spare Space: OK 00:13:15.846 Temperature: OK 00:13:15.846 Device Reliability: OK 00:13:15.846 Read Only: No 00:13:15.846 Volatile Memory Backup: OK 00:13:15.846 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:15.846 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:15.846 Available Spare: 0% 00:13:15.846 Available Sp[2024-07-15 11:24:44.398256] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:15.846 [2024-07-15 11:24:44.406128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:15.846 [2024-07-15 11:24:44.406160] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:15.846 [2024-07-15 11:24:44.406169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:15.846 [2024-07-15 11:24:44.406175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:15.846 [2024-07-15 11:24:44.406184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:15.846 [2024-07-15 11:24:44.406190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:15.846 [2024-07-15 11:24:44.406238] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:15.846 [2024-07-15 11:24:44.406249] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:15.846 [2024-07-15 11:24:44.407246] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:15.846 [2024-07-15 11:24:44.407294] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:15.846 [2024-07-15 11:24:44.407301] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:15.846 [2024-07-15 11:24:44.408254] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:15.846 [2024-07-15 11:24:44.408266] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:15.846 [2024-07-15 11:24:44.408312] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:15.846 [2024-07-15 11:24:44.411130] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:15.846 are Threshold: 0% 00:13:15.846 Life Percentage Used: 0% 00:13:15.846 Data Units Read: 0 00:13:15.846 Data Units Written: 0 00:13:15.846 Host Read Commands: 0 00:13:15.846 Host Write Commands: 0 00:13:15.846 Controller Busy Time: 0 minutes 00:13:15.846 Power Cycles: 0 00:13:15.846 Power On Hours: 0 hours 00:13:15.846 Unsafe Shutdowns: 0 00:13:15.846 Unrecoverable Media 
Errors: 0 00:13:15.846 Lifetime Error Log Entries: 0 00:13:15.846 Warning Temperature Time: 0 minutes 00:13:15.846 Critical Temperature Time: 0 minutes 00:13:15.846 00:13:15.846 Number of Queues 00:13:15.846 ================ 00:13:15.846 Number of I/O Submission Queues: 127 00:13:15.846 Number of I/O Completion Queues: 127 00:13:15.846 00:13:15.846 Active Namespaces 00:13:15.846 ================= 00:13:15.846 Namespace ID:1 00:13:15.846 Error Recovery Timeout: Unlimited 00:13:15.846 Command Set Identifier: NVM (00h) 00:13:15.846 Deallocate: Supported 00:13:15.846 Deallocated/Unwritten Error: Not Supported 00:13:15.846 Deallocated Read Value: Unknown 00:13:15.846 Deallocate in Write Zeroes: Not Supported 00:13:15.846 Deallocated Guard Field: 0xFFFF 00:13:15.846 Flush: Supported 00:13:15.846 Reservation: Supported 00:13:15.846 Namespace Sharing Capabilities: Multiple Controllers 00:13:15.846 Size (in LBAs): 131072 (0GiB) 00:13:15.846 Capacity (in LBAs): 131072 (0GiB) 00:13:15.846 Utilization (in LBAs): 131072 (0GiB) 00:13:15.846 NGUID: 3DB7193A6950469D91F27F758E4899DB 00:13:15.846 UUID: 3db7193a-6950-469d-91f2-7f758e4899db 00:13:15.846 Thin Provisioning: Not Supported 00:13:15.846 Per-NS Atomic Units: Yes 00:13:15.846 Atomic Boundary Size (Normal): 0 00:13:15.846 Atomic Boundary Size (PFail): 0 00:13:15.846 Atomic Boundary Offset: 0 00:13:15.846 Maximum Single Source Range Length: 65535 00:13:15.846 Maximum Copy Length: 65535 00:13:15.846 Maximum Source Range Count: 1 00:13:15.846 NGUID/EUI64 Never Reused: No 00:13:15.846 Namespace Write Protected: No 00:13:15.846 Number of LBA Formats: 1 00:13:15.846 Current LBA Format: LBA Format #00 00:13:15.846 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:15.846 00:13:15.846 11:24:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:15.846 EAL: No free 2048 kB hugepages reported on node 1 00:13:16.105 [2024-07-15 11:24:44.595155] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:21.387 Initializing NVMe Controllers 00:13:21.387 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:21.387 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:21.387 Initialization complete. Launching workers. 
00:13:21.387 ======================================================== 00:13:21.387 Latency(us) 00:13:21.387 Device Information : IOPS MiB/s Average min max 00:13:21.387 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39980.00 156.17 3203.98 825.30 6867.83 00:13:21.387 ======================================================== 00:13:21.387 Total : 39980.00 156.17 3203.98 825.30 6867.83 00:13:21.387 00:13:21.387 [2024-07-15 11:24:49.701295] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:21.387 11:24:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:21.387 EAL: No free 2048 kB hugepages reported on node 1 00:13:21.387 [2024-07-15 11:24:49.881848] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:26.674 Initializing NVMe Controllers 00:13:26.674 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:26.675 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:26.675 Initialization complete. Launching workers. 00:13:26.675 ======================================================== 00:13:26.675 Latency(us) 00:13:26.675 Device Information : IOPS MiB/s Average min max 00:13:26.675 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35895.64 140.22 3565.49 1099.32 7578.36 00:13:26.675 ======================================================== 00:13:26.675 Total : 35895.64 140.22 3565.49 1099.32 7578.36 00:13:26.675 00:13:26.675 [2024-07-15 11:24:54.903903] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:26.675 11:24:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:26.675 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.675 [2024-07-15 11:24:55.093497] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:31.959 [2024-07-15 11:25:00.237208] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:31.959 Initializing NVMe Controllers 00:13:31.959 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:31.959 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:31.959 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:31.959 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:31.959 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:31.959 Initialization complete. Launching workers. 
00:13:31.959 Starting thread on core 2 00:13:31.959 Starting thread on core 3 00:13:31.959 Starting thread on core 1 00:13:31.959 11:25:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:31.959 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.959 [2024-07-15 11:25:00.493525] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:35.304 [2024-07-15 11:25:03.545547] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:35.304 Initializing NVMe Controllers 00:13:35.304 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:35.304 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:35.304 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:35.304 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:35.304 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:35.304 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:35.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:35.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:35.304 Initialization complete. Launching workers. 00:13:35.304 Starting thread on core 1 with urgent priority queue 00:13:35.304 Starting thread on core 2 with urgent priority queue 00:13:35.304 Starting thread on core 3 with urgent priority queue 00:13:35.304 Starting thread on core 0 with urgent priority queue 00:13:35.304 SPDK bdev Controller (SPDK2 ) core 0: 14780.00 IO/s 6.77 secs/100000 ios 00:13:35.304 SPDK bdev Controller (SPDK2 ) core 1: 13979.67 IO/s 7.15 secs/100000 ios 00:13:35.304 SPDK bdev Controller (SPDK2 ) core 2: 16161.00 IO/s 6.19 secs/100000 ios 00:13:35.304 SPDK bdev Controller (SPDK2 ) core 3: 10435.67 IO/s 9.58 secs/100000 ios 00:13:35.304 ======================================================== 00:13:35.304 00:13:35.304 11:25:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:35.304 EAL: No free 2048 kB hugepages reported on node 1 00:13:35.304 [2024-07-15 11:25:03.816586] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:35.304 Initializing NVMe Controllers 00:13:35.304 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:35.304 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:35.304 Namespace ID: 1 size: 0GB 00:13:35.304 Initialization complete. 00:13:35.304 INFO: using host memory buffer for IO 00:13:35.304 Hello world! 
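Note: every host-side tool exercised in this run targets the controller through the same VFIOUSER transport string; a minimal sketch of the invocation pattern, using only the commands and flags that appear verbatim in the trace above (workspace path as in this log):

  # Hedged sketch: common transport string and the host-side tools run against it.
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Throughput runs as above: queue depth 128, 4096-byte I/O, 5 s, core mask 0x2.
  $SPDK/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2
  $SPDK/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2

  # Example applications against the same vfio-user controller.
  $SPDK/build/examples/reconnect   -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
  $SPDK/build/examples/arbitration -t 3 -r "$TRID" -d 256 -g
  $SPDK/build/examples/hello_world -d 256 -g -r "$TRID"
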
00:13:35.304 [2024-07-15 11:25:03.824643] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:35.304 11:25:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:35.304 EAL: No free 2048 kB hugepages reported on node 1 00:13:35.564 [2024-07-15 11:25:04.084423] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:36.506 Initializing NVMe Controllers 00:13:36.506 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:36.506 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:36.506 Initialization complete. Launching workers. 00:13:36.506 submit (in ns) avg, min, max = 7986.4, 3937.5, 4001449.2 00:13:36.506 complete (in ns) avg, min, max = 16181.8, 2390.8, 4000665.8 00:13:36.506 00:13:36.506 Submit histogram 00:13:36.506 ================ 00:13:36.506 Range in us Cumulative Count 00:13:36.506 3.920 - 3.947: 0.2303% ( 45) 00:13:36.506 3.947 - 3.973: 2.6763% ( 478) 00:13:36.506 3.973 - 4.000: 10.9917% ( 1625) 00:13:36.506 4.000 - 4.027: 20.7553% ( 1908) 00:13:36.506 4.027 - 4.053: 31.7572% ( 2150) 00:13:36.506 4.053 - 4.080: 42.2168% ( 2044) 00:13:36.506 4.080 - 4.107: 54.2780% ( 2357) 00:13:36.506 4.107 - 4.133: 71.0726% ( 3282) 00:13:36.506 4.133 - 4.160: 85.2523% ( 2771) 00:13:36.506 4.160 - 4.187: 93.5677% ( 1625) 00:13:36.506 4.187 - 4.213: 97.4823% ( 765) 00:13:36.506 4.213 - 4.240: 98.9049% ( 278) 00:13:36.506 4.240 - 4.267: 99.3655% ( 90) 00:13:36.506 4.267 - 4.293: 99.4627% ( 19) 00:13:36.506 4.293 - 4.320: 99.5036% ( 8) 00:13:36.506 4.320 - 4.347: 99.5139% ( 2) 00:13:36.506 4.427 - 4.453: 99.5190% ( 1) 00:13:36.506 4.453 - 4.480: 99.5241% ( 1) 00:13:36.506 4.613 - 4.640: 99.5343% ( 2) 00:13:36.506 4.667 - 4.693: 99.5395% ( 1) 00:13:36.506 5.067 - 5.093: 99.5446% ( 1) 00:13:36.506 5.093 - 5.120: 99.5497% ( 1) 00:13:36.506 5.200 - 5.227: 99.5548% ( 1) 00:13:36.506 5.307 - 5.333: 99.5599% ( 1) 00:13:36.506 5.413 - 5.440: 99.5702% ( 2) 00:13:36.506 5.520 - 5.547: 99.5753% ( 1) 00:13:36.506 5.627 - 5.653: 99.5804% ( 1) 00:13:36.506 5.787 - 5.813: 99.5855% ( 1) 00:13:36.506 5.840 - 5.867: 99.5906% ( 1) 00:13:36.506 6.027 - 6.053: 99.5957% ( 1) 00:13:36.506 6.053 - 6.080: 99.6009% ( 1) 00:13:36.506 6.133 - 6.160: 99.6111% ( 2) 00:13:36.506 6.160 - 6.187: 99.6162% ( 1) 00:13:36.506 6.240 - 6.267: 99.6213% ( 1) 00:13:36.506 6.320 - 6.347: 99.6367% ( 3) 00:13:36.506 6.347 - 6.373: 99.6469% ( 2) 00:13:36.506 6.373 - 6.400: 99.6520% ( 1) 00:13:36.507 6.427 - 6.453: 99.6571% ( 1) 00:13:36.507 6.453 - 6.480: 99.6674% ( 2) 00:13:36.507 6.480 - 6.507: 99.6725% ( 1) 00:13:36.507 6.507 - 6.533: 99.6827% ( 2) 00:13:36.507 6.533 - 6.560: 99.6879% ( 1) 00:13:36.507 6.587 - 6.613: 99.6930% ( 1) 00:13:36.507 6.613 - 6.640: 99.6981% ( 1) 00:13:36.507 6.827 - 6.880: 99.7032% ( 1) 00:13:36.507 6.933 - 6.987: 99.7134% ( 2) 00:13:36.507 6.987 - 7.040: 99.7186% ( 1) 00:13:36.507 7.200 - 7.253: 99.7237% ( 1) 00:13:36.507 7.307 - 7.360: 99.7288% ( 1) 00:13:36.507 7.360 - 7.413: 99.7339% ( 1) 00:13:36.507 7.413 - 7.467: 99.7390% ( 1) 00:13:36.507 7.573 - 7.627: 99.7441% ( 1) 00:13:36.507 7.627 - 7.680: 99.7544% ( 2) 00:13:36.507 7.787 - 7.840: 99.7595% ( 1) 00:13:36.507 7.840 - 7.893: 99.7646% ( 1) 00:13:36.507 7.893 - 7.947: 99.7697% ( 1) 00:13:36.507 8.000 - 8.053: 99.7800% ( 2) 
00:13:36.507 8.053 - 8.107: 99.8004% ( 4) 00:13:36.507 8.107 - 8.160: 99.8107% ( 2) 00:13:36.507 8.160 - 8.213: 99.8209% ( 2) 00:13:36.507 8.480 - 8.533: 99.8260% ( 1) 00:13:36.507 8.533 - 8.587: 99.8363% ( 2) 00:13:36.507 8.587 - 8.640: 99.8414% ( 1) 00:13:36.507 8.693 - 8.747: 99.8465% ( 1) 00:13:36.507 8.747 - 8.800: 99.8567% ( 2) 00:13:36.507 8.853 - 8.907: 99.8670% ( 2) 00:13:36.507 8.907 - 8.960: 99.8721% ( 1) 00:13:36.507 9.067 - 9.120: 99.8772% ( 1) 00:13:36.507 9.547 - 9.600: 99.8823% ( 1) 00:13:36.507 12.480 - 12.533: 99.8874% ( 1) 00:13:36.507 12.693 - 12.747: 99.8925% ( 1) 00:13:36.507 12.853 - 12.907: 99.8977% ( 1) 00:13:36.507 13.280 - 13.333: 99.9028% ( 1) 00:13:36.507 3986.773 - 4014.080: 100.0000% ( 19) 00:13:36.507 00:13:36.507 Complete histogram 00:13:36.507 ================== 00:13:36.507 Range in us Cumulative Count 00:13:36.507 2.387 - 2.400: 0.0051% ( 1) 00:13:36.507 2.400 - 2.413: 0.5066% ( 98) 00:13:36.507 2.413 - 2.427: 0.7420% ( 46) 00:13:36.507 2.427 - [2024-07-15 11:25:05.179816] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:36.769 2.440: 0.8904% ( 29) 00:13:36.769 2.440 - 2.453: 1.0337% ( 28) 00:13:36.769 2.453 - 2.467: 1.0900% ( 11) 00:13:36.769 2.467 - 2.480: 38.9981% ( 7408) 00:13:36.769 2.480 - 2.493: 55.9871% ( 3320) 00:13:36.769 2.493 - 2.507: 68.6675% ( 2478) 00:13:36.769 2.507 - 2.520: 78.2776% ( 1878) 00:13:36.769 2.520 - 2.533: 81.6395% ( 657) 00:13:36.769 2.533 - 2.547: 84.3517% ( 530) 00:13:36.769 2.547 - 2.560: 89.1158% ( 931) 00:13:36.769 2.560 - 2.573: 94.4274% ( 1038) 00:13:36.769 2.573 - 2.587: 96.7455% ( 453) 00:13:36.769 2.587 - 2.600: 98.3471% ( 313) 00:13:36.769 2.600 - 2.613: 99.1864% ( 164) 00:13:36.769 2.613 - 2.627: 99.3655% ( 35) 00:13:36.769 2.627 - 2.640: 99.3859% ( 4) 00:13:36.769 2.640 - 2.653: 99.3962% ( 2) 00:13:36.769 2.653 - 2.667: 99.4013% ( 1) 00:13:36.769 2.667 - 2.680: 99.4064% ( 1) 00:13:36.769 2.800 - 2.813: 99.4115% ( 1) 00:13:36.769 3.027 - 3.040: 99.4166% ( 1) 00:13:36.769 4.373 - 4.400: 99.4218% ( 1) 00:13:36.769 4.533 - 4.560: 99.4269% ( 1) 00:13:36.769 4.667 - 4.693: 99.4371% ( 2) 00:13:36.769 4.693 - 4.720: 99.4422% ( 1) 00:13:36.769 4.747 - 4.773: 99.4473% ( 1) 00:13:36.769 4.800 - 4.827: 99.4576% ( 2) 00:13:36.769 4.827 - 4.853: 99.4627% ( 1) 00:13:36.769 4.853 - 4.880: 99.4729% ( 2) 00:13:36.769 4.960 - 4.987: 99.4780% ( 1) 00:13:36.769 5.093 - 5.120: 99.4832% ( 1) 00:13:36.769 5.120 - 5.147: 99.4883% ( 1) 00:13:36.769 5.173 - 5.200: 99.4934% ( 1) 00:13:36.769 5.200 - 5.227: 99.4985% ( 1) 00:13:36.769 5.360 - 5.387: 99.5036% ( 1) 00:13:36.769 5.467 - 5.493: 99.5088% ( 1) 00:13:36.769 5.493 - 5.520: 99.5139% ( 1) 00:13:36.769 5.600 - 5.627: 99.5190% ( 1) 00:13:36.769 5.813 - 5.840: 99.5292% ( 2) 00:13:36.769 5.867 - 5.893: 99.5343% ( 1) 00:13:36.769 5.893 - 5.920: 99.5395% ( 1) 00:13:36.769 5.947 - 5.973: 99.5446% ( 1) 00:13:36.769 6.053 - 6.080: 99.5497% ( 1) 00:13:36.769 6.133 - 6.160: 99.5548% ( 1) 00:13:36.769 6.373 - 6.400: 99.5599% ( 1) 00:13:36.769 6.453 - 6.480: 99.5650% ( 1) 00:13:36.769 6.507 - 6.533: 99.5702% ( 1) 00:13:36.769 6.720 - 6.747: 99.5753% ( 1) 00:13:36.769 6.800 - 6.827: 99.5855% ( 2) 00:13:36.769 6.827 - 6.880: 99.5957% ( 2) 00:13:36.769 6.880 - 6.933: 99.6009% ( 1) 00:13:36.769 6.987 - 7.040: 99.6060% ( 1) 00:13:36.769 7.093 - 7.147: 99.6111% ( 1) 00:13:36.769 7.147 - 7.200: 99.6162% ( 1) 00:13:36.769 7.253 - 7.307: 99.6213% ( 1) 00:13:36.769 7.680 - 7.733: 99.6264% ( 1) 00:13:36.769 7.733 - 7.787: 99.6316% ( 1) 00:13:36.769 
8.213 - 8.267: 99.6367% ( 1) 00:13:36.769 8.320 - 8.373: 99.6418% ( 1) 00:13:36.769 15.253 - 15.360: 99.6469% ( 1) 00:13:36.769 34.987 - 35.200: 99.6520% ( 1) 00:13:36.769 95.147 - 95.573: 99.6571% ( 1) 00:13:36.769 3372.373 - 3386.027: 99.6623% ( 1) 00:13:36.769 3986.773 - 4014.080: 100.0000% ( 66) 00:13:36.769 00:13:36.769 11:25:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:36.769 11:25:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:36.769 11:25:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:36.769 11:25:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:36.769 11:25:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:36.769 [ 00:13:36.769 { 00:13:36.769 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:36.769 "subtype": "Discovery", 00:13:36.769 "listen_addresses": [], 00:13:36.769 "allow_any_host": true, 00:13:36.769 "hosts": [] 00:13:36.769 }, 00:13:36.769 { 00:13:36.769 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:36.769 "subtype": "NVMe", 00:13:36.769 "listen_addresses": [ 00:13:36.769 { 00:13:36.769 "trtype": "VFIOUSER", 00:13:36.769 "adrfam": "IPv4", 00:13:36.769 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:36.769 "trsvcid": "0" 00:13:36.769 } 00:13:36.769 ], 00:13:36.769 "allow_any_host": true, 00:13:36.769 "hosts": [], 00:13:36.769 "serial_number": "SPDK1", 00:13:36.769 "model_number": "SPDK bdev Controller", 00:13:36.769 "max_namespaces": 32, 00:13:36.769 "min_cntlid": 1, 00:13:36.769 "max_cntlid": 65519, 00:13:36.769 "namespaces": [ 00:13:36.769 { 00:13:36.769 "nsid": 1, 00:13:36.769 "bdev_name": "Malloc1", 00:13:36.769 "name": "Malloc1", 00:13:36.769 "nguid": "A9494A85FCED4CFCB40560656C88D8E7", 00:13:36.769 "uuid": "a9494a85-fced-4cfc-b405-60656c88d8e7" 00:13:36.769 }, 00:13:36.769 { 00:13:36.769 "nsid": 2, 00:13:36.769 "bdev_name": "Malloc3", 00:13:36.769 "name": "Malloc3", 00:13:36.769 "nguid": "DAB3296BB61942288C19FBE7C49990C0", 00:13:36.769 "uuid": "dab3296b-b619-4228-8c19-fbe7c49990c0" 00:13:36.769 } 00:13:36.769 ] 00:13:36.769 }, 00:13:36.770 { 00:13:36.770 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:36.770 "subtype": "NVMe", 00:13:36.770 "listen_addresses": [ 00:13:36.770 { 00:13:36.770 "trtype": "VFIOUSER", 00:13:36.770 "adrfam": "IPv4", 00:13:36.770 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:36.770 "trsvcid": "0" 00:13:36.770 } 00:13:36.770 ], 00:13:36.770 "allow_any_host": true, 00:13:36.770 "hosts": [], 00:13:36.770 "serial_number": "SPDK2", 00:13:36.770 "model_number": "SPDK bdev Controller", 00:13:36.770 "max_namespaces": 32, 00:13:36.770 "min_cntlid": 1, 00:13:36.770 "max_cntlid": 65519, 00:13:36.770 "namespaces": [ 00:13:36.770 { 00:13:36.770 "nsid": 1, 00:13:36.770 "bdev_name": "Malloc2", 00:13:36.770 "name": "Malloc2", 00:13:36.770 "nguid": "3DB7193A6950469D91F27F758E4899DB", 00:13:36.770 "uuid": "3db7193a-6950-469d-91f2-7f758e4899db" 00:13:36.770 } 00:13:36.770 ] 00:13:36.770 } 00:13:36.770 ] 00:13:36.770 11:25:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:36.770 11:25:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3463661 00:13:36.770 11:25:05 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:36.770 11:25:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:36.770 11:25:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:36.770 11:25:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:36.770 11:25:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:36.770 11:25:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:36.770 11:25:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:36.770 11:25:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:36.770 EAL: No free 2048 kB hugepages reported on node 1 00:13:37.032 [2024-07-15 11:25:05.554960] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:37.032 Malloc4 00:13:37.033 11:25:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:37.033 [2024-07-15 11:25:05.718064] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:37.295 11:25:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:37.295 Asynchronous Event Request test 00:13:37.295 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:37.295 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:37.295 Registering asynchronous event callbacks... 00:13:37.295 Starting namespace attribute notice tests for all controllers... 00:13:37.295 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:37.295 aer_cb - Changed Namespace 00:13:37.295 Cleaning up... 
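Note: the nvmf_get_subsystems dumps above and below are plain JSON on stdout; a minimal sketch of listing namespace UUIDs per subsystem from that output, assuming jq is available on the build host (rpc.py path as used throughout this run):

  # Hedged sketch: extract NQN, nsid and uuid for every attached namespace.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_get_subsystems \
    | jq -r '.[] | select(.namespaces != null) | .nqn as $nqn
             | .namespaces[] | "\($nqn) nsid=\(.nsid) uuid=\(.uuid)"'
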
00:13:37.295 [ 00:13:37.295 { 00:13:37.295 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:37.295 "subtype": "Discovery", 00:13:37.295 "listen_addresses": [], 00:13:37.295 "allow_any_host": true, 00:13:37.295 "hosts": [] 00:13:37.295 }, 00:13:37.295 { 00:13:37.295 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:37.295 "subtype": "NVMe", 00:13:37.295 "listen_addresses": [ 00:13:37.295 { 00:13:37.295 "trtype": "VFIOUSER", 00:13:37.295 "adrfam": "IPv4", 00:13:37.295 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:37.295 "trsvcid": "0" 00:13:37.295 } 00:13:37.295 ], 00:13:37.295 "allow_any_host": true, 00:13:37.295 "hosts": [], 00:13:37.295 "serial_number": "SPDK1", 00:13:37.295 "model_number": "SPDK bdev Controller", 00:13:37.295 "max_namespaces": 32, 00:13:37.295 "min_cntlid": 1, 00:13:37.295 "max_cntlid": 65519, 00:13:37.295 "namespaces": [ 00:13:37.295 { 00:13:37.295 "nsid": 1, 00:13:37.295 "bdev_name": "Malloc1", 00:13:37.295 "name": "Malloc1", 00:13:37.295 "nguid": "A9494A85FCED4CFCB40560656C88D8E7", 00:13:37.295 "uuid": "a9494a85-fced-4cfc-b405-60656c88d8e7" 00:13:37.295 }, 00:13:37.295 { 00:13:37.295 "nsid": 2, 00:13:37.295 "bdev_name": "Malloc3", 00:13:37.295 "name": "Malloc3", 00:13:37.295 "nguid": "DAB3296BB61942288C19FBE7C49990C0", 00:13:37.295 "uuid": "dab3296b-b619-4228-8c19-fbe7c49990c0" 00:13:37.295 } 00:13:37.295 ] 00:13:37.295 }, 00:13:37.295 { 00:13:37.295 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:37.295 "subtype": "NVMe", 00:13:37.295 "listen_addresses": [ 00:13:37.295 { 00:13:37.295 "trtype": "VFIOUSER", 00:13:37.295 "adrfam": "IPv4", 00:13:37.295 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:37.295 "trsvcid": "0" 00:13:37.295 } 00:13:37.295 ], 00:13:37.295 "allow_any_host": true, 00:13:37.295 "hosts": [], 00:13:37.295 "serial_number": "SPDK2", 00:13:37.295 "model_number": "SPDK bdev Controller", 00:13:37.295 "max_namespaces": 32, 00:13:37.295 "min_cntlid": 1, 00:13:37.295 "max_cntlid": 65519, 00:13:37.295 "namespaces": [ 00:13:37.295 { 00:13:37.295 "nsid": 1, 00:13:37.295 "bdev_name": "Malloc2", 00:13:37.295 "name": "Malloc2", 00:13:37.295 "nguid": "3DB7193A6950469D91F27F758E4899DB", 00:13:37.295 "uuid": "3db7193a-6950-469d-91f2-7f758e4899db" 00:13:37.295 }, 00:13:37.295 { 00:13:37.295 "nsid": 2, 00:13:37.295 "bdev_name": "Malloc4", 00:13:37.295 "name": "Malloc4", 00:13:37.295 "nguid": "5289A85CFEF44618AE5480EADC65FE2C", 00:13:37.295 "uuid": "5289a85c-fef4-4618-ae54-80eadc65fe2c" 00:13:37.295 } 00:13:37.295 ] 00:13:37.295 } 00:13:37.295 ] 00:13:37.295 11:25:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3463661 00:13:37.295 11:25:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:37.295 11:25:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3454551 00:13:37.295 11:25:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3454551 ']' 00:13:37.295 11:25:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3454551 00:13:37.295 11:25:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:37.295 11:25:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:37.295 11:25:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3454551 00:13:37.295 11:25:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:37.296 11:25:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:13:37.296 11:25:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3454551' 00:13:37.296 killing process with pid 3454551 00:13:37.296 11:25:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3454551 00:13:37.296 11:25:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3454551 00:13:37.561 11:25:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:37.561 11:25:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:37.561 11:25:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:37.561 11:25:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:37.561 11:25:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:37.561 11:25:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3463974 00:13:37.561 11:25:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:37.561 11:25:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3463974' 00:13:37.561 Process pid: 3463974 00:13:37.561 11:25:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:37.561 11:25:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3463974 00:13:37.561 11:25:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3463974 ']' 00:13:37.561 11:25:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.561 11:25:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:37.561 11:25:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.561 11:25:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:37.561 11:25:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:37.561 [2024-07-15 11:25:06.205828] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:37.561 [2024-07-15 11:25:06.206741] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:13:37.561 [2024-07-15 11:25:06.206787] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.561 EAL: No free 2048 kB hugepages reported on node 1 00:13:37.822 [2024-07-15 11:25:06.266984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:37.822 [2024-07-15 11:25:06.331996] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:37.822 [2024-07-15 11:25:06.332034] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:37.822 [2024-07-15 11:25:06.332041] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:37.822 [2024-07-15 11:25:06.332048] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:37.822 [2024-07-15 11:25:06.332053] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:37.822 [2024-07-15 11:25:06.332153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.822 [2024-07-15 11:25:06.332234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.822 [2024-07-15 11:25:06.332389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.822 [2024-07-15 11:25:06.332390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:37.822 [2024-07-15 11:25:06.400258] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:37.822 [2024-07-15 11:25:06.400324] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:37.822 [2024-07-15 11:25:06.401327] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:13:37.822 [2024-07-15 11:25:06.401675] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:37.822 [2024-07-15 11:25:06.401772] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:38.392 11:25:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:38.392 11:25:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:38.392 11:25:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:39.336 11:25:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:39.596 11:25:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:39.596 11:25:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:39.596 11:25:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:39.596 11:25:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:39.596 11:25:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:39.857 Malloc1 00:13:39.857 11:25:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:39.857 11:25:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:40.117 11:25:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:40.379 11:25:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:13:40.379 11:25:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:40.379 11:25:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:40.379 Malloc2 00:13:40.379 11:25:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:40.640 11:25:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:40.901 11:25:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:40.901 11:25:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:40.901 11:25:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3463974 00:13:40.901 11:25:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3463974 ']' 00:13:40.901 11:25:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3463974 00:13:40.901 11:25:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:40.901 11:25:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:40.901 11:25:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3463974 00:13:40.901 11:25:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:40.901 11:25:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:40.901 11:25:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3463974' 00:13:40.901 killing process with pid 3463974 00:13:40.901 11:25:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3463974 00:13:40.901 11:25:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3463974 00:13:41.162 11:25:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:41.162 11:25:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:41.162 00:13:41.162 real 0m50.524s 00:13:41.162 user 3m20.179s 00:13:41.162 sys 0m3.036s 00:13:41.162 11:25:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:41.162 11:25:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:41.162 ************************************ 00:13:41.162 END TEST nvmf_vfio_user 00:13:41.162 ************************************ 00:13:41.162 11:25:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:41.162 11:25:09 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:41.162 11:25:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:41.162 11:25:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:41.162 11:25:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:41.162 ************************************ 00:13:41.162 START 
TEST nvmf_vfio_user_nvme_compliance 00:13:41.162 ************************************ 00:13:41.162 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:41.422 * Looking for test storage... 00:13:41.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:41.422 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:41.422 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:41.422 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.422 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.422 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.422 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.422 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.422 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.422 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.422 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.422 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.422 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.422 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:41.422 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:41.422 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.422 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.422 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:41.422 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:41.422 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:41.422 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.422 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.422 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.422 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.422 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.422 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.422 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:41.422 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.422 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:41.422 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:41.423 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:41.423 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:41.423 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.423 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.423 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:41.423 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:41.423 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:41.423 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:13:41.423 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:41.423 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:41.423 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:41.423 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:41.423 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3464740 00:13:41.423 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3464740' 00:13:41.423 Process pid: 3464740 00:13:41.423 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:41.423 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3464740 00:13:41.423 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:41.423 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 3464740 ']' 00:13:41.423 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.423 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:41.423 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.423 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:41.423 11:25:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:41.423 [2024-07-15 11:25:10.009651] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:13:41.423 [2024-07-15 11:25:10.009720] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.423 EAL: No free 2048 kB hugepages reported on node 1 00:13:41.423 [2024-07-15 11:25:10.074879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:41.683 [2024-07-15 11:25:10.151049] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:41.683 [2024-07-15 11:25:10.151086] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:41.683 [2024-07-15 11:25:10.151094] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:41.683 [2024-07-15 11:25:10.151100] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:41.683 [2024-07-15 11:25:10.151106] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:41.683 [2024-07-15 11:25:10.151181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.683 [2024-07-15 11:25:10.151458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.683 [2024-07-15 11:25:10.151462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.254 11:25:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:42.254 11:25:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:13:42.254 11:25:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:43.194 11:25:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:43.194 11:25:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:43.194 11:25:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:43.194 11:25:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.194 11:25:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:43.194 11:25:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.194 11:25:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:43.194 11:25:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:43.194 11:25:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.194 11:25:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:43.194 malloc0 00:13:43.194 11:25:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.194 11:25:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:43.194 11:25:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.194 11:25:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:43.194 11:25:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.194 11:25:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:43.194 11:25:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.194 11:25:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:43.194 11:25:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.194 11:25:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:43.194 11:25:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.194 11:25:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:43.194 11:25:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.194 
11:25:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:43.455 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.455 00:13:43.455 00:13:43.455 CUnit - A unit testing framework for C - Version 2.1-3 00:13:43.455 http://cunit.sourceforge.net/ 00:13:43.455 00:13:43.455 00:13:43.455 Suite: nvme_compliance 00:13:43.455 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 11:25:12.034565] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:43.455 [2024-07-15 11:25:12.035906] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:43.455 [2024-07-15 11:25:12.035918] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:43.455 [2024-07-15 11:25:12.035922] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:43.455 [2024-07-15 11:25:12.037583] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:43.455 passed 00:13:43.455 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 11:25:12.134170] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:43.455 [2024-07-15 11:25:12.137184] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:43.715 passed 00:13:43.715 Test: admin_identify_ns ...[2024-07-15 11:25:12.232355] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:43.715 [2024-07-15 11:25:12.292130] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:43.715 [2024-07-15 11:25:12.300131] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:43.715 [2024-07-15 11:25:12.321242] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:43.715 passed 00:13:43.715 Test: admin_get_features_mandatory_features ...[2024-07-15 11:25:12.415250] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:43.975 [2024-07-15 11:25:12.418260] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:43.976 passed 00:13:43.976 Test: admin_get_features_optional_features ...[2024-07-15 11:25:12.511808] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:43.976 [2024-07-15 11:25:12.515840] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:43.976 passed 00:13:43.976 Test: admin_set_features_number_of_queues ...[2024-07-15 11:25:12.607972] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:44.236 [2024-07-15 11:25:12.713233] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:44.236 passed 00:13:44.236 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 11:25:12.805889] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:44.236 [2024-07-15 11:25:12.808910] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:44.236 passed 00:13:44.236 Test: admin_get_log_page_with_lpo ...[2024-07-15 11:25:12.902044] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:44.497 [2024-07-15 11:25:12.967136] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:44.497 [2024-07-15 11:25:12.980175] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:44.497 passed 00:13:44.497 Test: fabric_property_get ...[2024-07-15 11:25:13.074240] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:44.497 [2024-07-15 11:25:13.075479] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:44.497 [2024-07-15 11:25:13.077256] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:44.497 passed 00:13:44.497 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 11:25:13.170781] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:44.497 [2024-07-15 11:25:13.172063] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:44.497 [2024-07-15 11:25:13.174814] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:44.756 passed 00:13:44.756 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 11:25:13.266946] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:44.756 [2024-07-15 11:25:13.350130] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:44.756 [2024-07-15 11:25:13.366130] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:44.756 [2024-07-15 11:25:13.371212] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:44.756 passed 00:13:45.017 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 11:25:13.469451] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:45.017 [2024-07-15 11:25:13.470698] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:45.017 [2024-07-15 11:25:13.472476] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:45.017 passed 00:13:45.017 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 11:25:13.566389] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:45.017 [2024-07-15 11:25:13.642127] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:45.017 [2024-07-15 11:25:13.666130] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:45.017 [2024-07-15 11:25:13.671219] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:45.017 passed 00:13:45.278 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 11:25:13.771432] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:45.278 [2024-07-15 11:25:13.772667] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:45.278 [2024-07-15 11:25:13.772687] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:45.278 [2024-07-15 11:25:13.774442] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:45.278 passed 00:13:45.278 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 11:25:13.866352] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:45.278 [2024-07-15 11:25:13.958136] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:13:45.278 [2024-07-15 11:25:13.966130] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:45.278 [2024-07-15 11:25:13.974129] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:45.539 [2024-07-15 11:25:13.982128] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:45.539 [2024-07-15 11:25:14.011210] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:45.539 passed 00:13:45.539 Test: admin_create_io_sq_verify_pc ...[2024-07-15 11:25:14.105883] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:45.539 [2024-07-15 11:25:14.121140] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:45.539 [2024-07-15 11:25:14.139048] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:45.539 passed 00:13:45.539 Test: admin_create_io_qp_max_qps ...[2024-07-15 11:25:14.238590] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:46.923 [2024-07-15 11:25:15.343133] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:47.184 [2024-07-15 11:25:15.725631] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:47.184 passed 00:13:47.184 Test: admin_create_io_sq_shared_cq ...[2024-07-15 11:25:15.819394] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:47.443 [2024-07-15 11:25:15.951130] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:47.443 [2024-07-15 11:25:15.988187] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:47.443 passed 00:13:47.443 00:13:47.444 Run Summary: Type Total Ran Passed Failed Inactive 00:13:47.444 suites 1 1 n/a 0 0 00:13:47.444 tests 18 18 18 0 0 00:13:47.444 asserts 360 360 360 0 n/a 00:13:47.444 00:13:47.444 Elapsed time = 1.658 seconds 00:13:47.444 11:25:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3464740 00:13:47.444 11:25:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 3464740 ']' 00:13:47.444 11:25:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 3464740 00:13:47.444 11:25:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:13:47.444 11:25:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:47.444 11:25:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3464740 00:13:47.444 11:25:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:47.444 11:25:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:47.444 11:25:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3464740' 00:13:47.444 killing process with pid 3464740 00:13:47.444 11:25:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 3464740 00:13:47.444 11:25:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 3464740 00:13:47.704 11:25:16 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:47.704 11:25:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:47.704 00:13:47.704 real 0m6.418s 00:13:47.704 user 0m18.372s 00:13:47.704 sys 0m0.440s 00:13:47.704 11:25:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:47.704 11:25:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:47.704 ************************************ 00:13:47.704 END TEST nvmf_vfio_user_nvme_compliance 00:13:47.704 ************************************ 00:13:47.704 11:25:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:47.704 11:25:16 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:47.704 11:25:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:47.704 11:25:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:47.704 11:25:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:47.704 ************************************ 00:13:47.704 START TEST nvmf_vfio_user_fuzz 00:13:47.704 ************************************ 00:13:47.704 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:47.965 * Looking for test storage... 00:13:47.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
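At this point the compliance suite has been torn down and vfio_user_fuzz.sh sources test/nvmf/common.sh, which fixes the TCP ports, the serial number and the host identity used by every nvmf test. The host NQN comes from nvme-cli's gen-hostnqn, and its UUID suffix doubles as the host ID; a sketch of that derivation (the exact parameter expansion inside common.sh is an assumption, only the resulting values are visible in the trace):

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumed: keep only the UUID part of the NQN
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")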
00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:47.965 11:25:16 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:47.965 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:47.966 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:47.966 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3466127 00:13:47.966 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3466127' 00:13:47.966 Process pid: 3466127 00:13:47.966 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:47.966 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:47.966 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3466127 00:13:47.966 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 3466127 ']' 00:13:47.966 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.966 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:47.966 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
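The xtrace that follows (vfio_user_fuzz.sh@32 through @39) configures the freshly started target entirely over RPC before the fuzzer is pointed at it: a VFIOUSER transport, a 64 MiB malloc bdev, a subsystem, a namespace and a vfio-user listener. rpc_cmd in these traces is SPDK's wrapper around scripts/rpc.py, so the same sequence can be replayed by hand; the flags below are copied from the trace:

  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
      -t VFIOUSER -a /var/run/vfio-user -s 0

nvme_fuzz is then run against that endpoint for 30 seconds on core 1 (-m 0x2 -t 30) with a fixed seed (-S 123456), so the generated admin and I/O command stream is deterministic; the resulting opcode summary appears further down.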
00:13:47.966 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:47.966 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:48.226 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:48.226 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:13:48.226 11:25:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:49.168 11:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:49.168 11:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.168 11:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:49.168 11:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.168 11:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:49.168 11:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:49.168 11:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.168 11:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:49.168 malloc0 00:13:49.168 11:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.168 11:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:49.168 11:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.168 11:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:49.168 11:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.168 11:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:49.168 11:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.168 11:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:49.168 11:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.168 11:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:49.168 11:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.168 11:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:49.168 11:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.168 11:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:49.168 11:25:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:21.335 Fuzzing completed. 
Shutting down the fuzz application 00:14:21.335 00:14:21.335 Dumping successful admin opcodes: 00:14:21.335 8, 9, 10, 24, 00:14:21.335 Dumping successful io opcodes: 00:14:21.335 0, 00:14:21.335 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1232431, total successful commands: 4839, random_seed: 2276411392 00:14:21.335 NS: 0x200003a1ef00 admin qp, Total commands completed: 154976, total successful commands: 1252, random_seed: 2638998720 00:14:21.335 11:25:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:21.335 11:25:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.335 11:25:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:21.335 11:25:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.336 11:25:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3466127 00:14:21.336 11:25:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 3466127 ']' 00:14:21.336 11:25:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 3466127 00:14:21.336 11:25:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:14:21.336 11:25:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:21.336 11:25:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3466127 00:14:21.336 11:25:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:21.336 11:25:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:21.336 11:25:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3466127' 00:14:21.336 killing process with pid 3466127 00:14:21.336 11:25:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 3466127 00:14:21.336 11:25:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 3466127 00:14:21.336 11:25:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:21.336 11:25:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:21.336 00:14:21.336 real 0m33.132s 00:14:21.336 user 0m39.932s 00:14:21.336 sys 0m22.923s 00:14:21.336 11:25:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:21.336 11:25:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:21.336 ************************************ 00:14:21.336 END TEST nvmf_vfio_user_fuzz 00:14:21.336 ************************************ 00:14:21.336 11:25:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:21.336 11:25:49 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:21.336 11:25:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:21.336 11:25:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:21.336 11:25:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:21.336 ************************************ 
00:14:21.336 START TEST nvmf_host_management 00:14:21.336 ************************************ 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:21.336 * Looking for test storage... 00:14:21.336 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.336 
11:25:49 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:21.336 11:25:49 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:14:21.336 11:25:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:27.937 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:27.937 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:14:27.937 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:27.937 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:27.937 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:27.937 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:27.937 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:27.937 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:14:27.937 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:27.937 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:14:27.937 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:14:27.937 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:14:27.937 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:14:27.937 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:14:27.937 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:14:27.937 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:27.937 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:27.937 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:27.938 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:27.938 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:27.938 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:27.938 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:27.938 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:28.219 11:25:56 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:28.219 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:28.219 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:28.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:28.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms 00:14:28.219 00:14:28.219 --- 10.0.0.2 ping statistics --- 00:14:28.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.220 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:14:28.220 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:28.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:28.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:14:28.220 00:14:28.220 --- 10.0.0.1 ping statistics --- 00:14:28.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.220 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:14:28.220 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:28.220 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:14:28.220 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:28.220 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:28.220 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:28.220 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:28.220 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:28.220 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:28.220 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:28.220 11:25:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:14:28.220 11:25:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:14:28.220 11:25:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:28.220 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:28.220 11:25:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:28.220 11:25:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:28.220 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3476118 00:14:28.220 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3476118 00:14:28.220 11:25:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3476118 ']' 00:14:28.220 11:25:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.220 11:25:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:28.220 11:25:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
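The nvmf_tcp_init sequence above puts one port of the E810 NIC into a private network namespace so that target and initiator traffic crosses real hardware: cvl_0_0 (10.0.0.2) lives inside cvl_0_0_ns_spdk and hosts the target, cvl_0_1 (10.0.0.1) stays in the root namespace for the initiator, and an iptables rule opens the NVMe/TCP port before the ping sanity checks. Condensed from the trace, interface names as detected in this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                        # reach the namespaced side before testing

Every nvmf_tgt started after this point is prefixed with "ip netns exec cvl_0_0_ns_spdk" (NVMF_APP is rewritten at common.sh@270 above), which is why the target listens on 10.0.0.2 while bdevperf connects from the root namespace.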
00:14:28.220 11:25:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:28.220 11:25:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:28.220 11:25:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:28.220 [2024-07-15 11:25:56.860012] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:14:28.220 [2024-07-15 11:25:56.860058] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.220 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.480 [2024-07-15 11:25:56.942076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:28.480 [2024-07-15 11:25:57.021190] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:28.480 [2024-07-15 11:25:57.021242] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.480 [2024-07-15 11:25:57.021250] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:28.480 [2024-07-15 11:25:57.021257] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:28.480 [2024-07-15 11:25:57.021263] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:28.480 [2024-07-15 11:25:57.021387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:28.480 [2024-07-15 11:25:57.021551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:28.480 [2024-07-15 11:25:57.021721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:28.480 [2024-07-15 11:25:57.021722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.051 11:25:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:29.051 11:25:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:29.051 11:25:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:29.051 11:25:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:29.051 11:25:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:29.051 11:25:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:29.051 11:25:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:29.051 11:25:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.051 11:25:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:29.051 [2024-07-15 11:25:57.661621] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:29.051 11:25:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.051 11:25:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:29.051 11:25:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:29.051 11:25:57 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:29.051 11:25:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:29.051 11:25:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:14:29.051 11:25:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:14:29.051 11:25:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.051 11:25:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:29.051 Malloc0 00:14:29.051 [2024-07-15 11:25:57.724977] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.051 11:25:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.051 11:25:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:29.051 11:25:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:29.051 11:25:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:29.311 11:25:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3476484 00:14:29.311 11:25:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3476484 /var/tmp/bdevperf.sock 00:14:29.311 11:25:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3476484 ']' 00:14:29.311 11:25:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:29.311 11:25:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:29.311 11:25:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:29.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
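The host_management target is configured with a batch of RPCs written to rpcs.txt and piped through rpc_cmd at host_management.sh@30; the batch itself is not echoed in the trace. Based on the transport options above, the Malloc0 bdev that appears, the 10.0.0.2:4420 listener notice, and the nqn.2016-06.io.spdk:cnode0 subsystem that bdevperf attaches to just below, the batch is presumably equivalent to something like the following (everything except the transport line is an assumption):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420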
00:14:29.311 11:25:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:29.311 11:25:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:29.311 11:25:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:29.311 11:25:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:29.311 11:25:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:29.311 11:25:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:29.311 11:25:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:29.311 11:25:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:29.311 { 00:14:29.311 "params": { 00:14:29.311 "name": "Nvme$subsystem", 00:14:29.311 "trtype": "$TEST_TRANSPORT", 00:14:29.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:29.312 "adrfam": "ipv4", 00:14:29.312 "trsvcid": "$NVMF_PORT", 00:14:29.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:29.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:29.312 "hdgst": ${hdgst:-false}, 00:14:29.312 "ddgst": ${ddgst:-false} 00:14:29.312 }, 00:14:29.312 "method": "bdev_nvme_attach_controller" 00:14:29.312 } 00:14:29.312 EOF 00:14:29.312 )") 00:14:29.312 11:25:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:29.312 11:25:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:29.312 11:25:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:29.312 11:25:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:29.312 "params": { 00:14:29.312 "name": "Nvme0", 00:14:29.312 "trtype": "tcp", 00:14:29.312 "traddr": "10.0.0.2", 00:14:29.312 "adrfam": "ipv4", 00:14:29.312 "trsvcid": "4420", 00:14:29.312 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:29.312 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:29.312 "hdgst": false, 00:14:29.312 "ddgst": false 00:14:29.312 }, 00:14:29.312 "method": "bdev_nvme_attach_controller" 00:14:29.312 }' 00:14:29.312 [2024-07-15 11:25:57.824931] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:14:29.312 [2024-07-15 11:25:57.824983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3476484 ] 00:14:29.312 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.312 [2024-07-15 11:25:57.884076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.312 [2024-07-15 11:25:57.948644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.571 Running I/O for 10 seconds... 
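(For reference, the bdevperf run traced above can be reproduced by hand. The sketch below is an approximation, not the test script itself: the binary path, RPC socket, workload flags, target address 10.0.0.2:4420 and both NQNs are copied from the log, while the surrounding "subsystems"/"bdev"/"config" wrapper is an assumption about what gen_nvmf_target_json builds around the bdev_nvme_attach_controller fragment printed above.)

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # 10 s verify workload, queue depth 64, 64 KiB I/Os, bdev config fed in via process substitution
    $SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 \
        --json <(echo '{ "subsystems": [ { "subsystem": "bdev", "config": [
          { "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                        "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
                        "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false } } ] } ] }')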
00:14:30.143 11:25:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:30.143 11:25:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:30.143 11:25:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:30.143 11:25:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.143 11:25:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:30.143 11:25:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.143 11:25:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:30.143 11:25:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:30.143 11:25:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:30.143 11:25:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:30.143 11:25:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:14:30.143 11:25:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:14:30.143 11:25:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:30.143 11:25:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:30.143 11:25:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:30.143 11:25:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:30.143 11:25:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.143 11:25:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:30.143 11:25:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.143 11:25:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=583 00:14:30.143 11:25:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 583 -ge 100 ']' 00:14:30.143 11:25:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:14:30.143 11:25:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:14:30.143 11:25:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:14:30.143 11:25:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:30.143 11:25:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.143 11:25:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:30.143 [2024-07-15 11:25:58.663982] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aee40 is same with the state(5) to be set 00:14:30.143 [2024-07-15 11:25:58.664833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.143 [2024-07-15 11:25:58.664870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.143 [2024-07-15 11:25:58.664887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.143 [2024-07-15 11:25:58.664895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.143 [2024-07-15 11:25:58.664906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.143 [2024-07-15 11:25:58.664919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.143 [2024-07-15 11:25:58.664929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.143 [2024-07-15 11:25:58.664937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.143 [2024-07-15 11:25:58.664947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.143 [2024-07-15 11:25:58.664954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.143 [2024-07-15 11:25:58.664964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.143 [2024-07-15 11:25:58.664972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.143 [2024-07-15 11:25:58.664981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.143 [2024-07-15 11:25:58.664989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.143 [2024-07-15 11:25:58.664998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.143 [2024-07-15 11:25:58.665006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.143 [2024-07-15 11:25:58.665015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.143 [2024-07-15 11:25:58.665023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.143 [2024-07-15 11:25:58.665033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.143 [2024-07-15 11:25:58.665041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.143 [2024-07-15 11:25:58.665050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.143 [2024-07-15 11:25:58.665058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:14:30.143 [2024-07-15 11:25:58.665068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.143 [2024-07-15 11:25:58.665075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.143 [2024-07-15 11:25:58.665085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.143 [2024-07-15 11:25:58.665092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.143 [2024-07-15 11:25:58.665102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.143 [2024-07-15 11:25:58.665110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.143 [2024-07-15 11:25:58.665119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.143 [2024-07-15 11:25:58.665131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.143 [2024-07-15 11:25:58.665143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.143 [2024-07-15 11:25:58.665150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.143 [2024-07-15 11:25:58.665160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.143 [2024-07-15 11:25:58.665168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.143 [2024-07-15 11:25:58.665177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.143 [2024-07-15 11:25:58.665185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.143 [2024-07-15 11:25:58.665194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.143 [2024-07-15 11:25:58.665202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.143 [2024-07-15 11:25:58.665211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.143 [2024-07-15 11:25:58.665219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.143 [2024-07-15 11:25:58.665228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.143 [2024-07-15 11:25:58.665236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:14:30.143 [2024-07-15 11:25:58.665245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.143 [2024-07-15 11:25:58.665253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.143 [2024-07-15 11:25:58.665262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.143 [2024-07-15 11:25:58.665269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.143 [2024-07-15 11:25:58.665279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.143 [2024-07-15 11:25:58.665287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.143 [2024-07-15 11:25:58.665296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.143 [2024-07-15 11:25:58.665304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.143 [2024-07-15 11:25:58.665314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.143 [2024-07-15 11:25:58.665321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.143 [2024-07-15 11:25:58.665331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.143 [2024-07-15 11:25:58.665338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 
[2024-07-15 11:25:58.665420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665594] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665767] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665942] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.665977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.144 [2024-07-15 11:25:58.665985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.144 [2024-07-15 11:25:58.666036] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x25224f0 was disconnected and freed. reset controller. 00:14:30.144 [2024-07-15 11:25:58.667239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:30.144 task offset: 86912 on job bdev=Nvme0n1 fails 00:14:30.144 00:14:30.144 Latency(us) 00:14:30.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.144 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:30.144 Job: Nvme0n1 ended in about 0.57 seconds with error 00:14:30.144 Verification LBA range: start 0x0 length 0x400 00:14:30.144 Nvme0n1 : 0.57 1120.55 70.03 111.36 0.00 50807.99 1733.97 43909.12 00:14:30.144 =================================================================================================================== 00:14:30.144 Total : 1120.55 70.03 111.36 0.00 50807.99 1733.97 43909.12 00:14:30.144 11:25:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.144 [2024-07-15 11:25:58.669240] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:30.144 [2024-07-15 11:25:58.669263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21113b0 (9): Bad file descriptor 00:14:30.144 11:25:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:30.145 11:25:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.145 11:25:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:30.145 11:25:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.145 11:25:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:14:30.145 [2024-07-15 11:25:58.730861] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
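(The burst of ABORTED/SQ DELETION completions and the controller reset above are the point of this test step: the host's access to the subsystem is revoked while bdevperf is mid-workload, then restored so the automatic reset can reconnect. A minimal sketch of that RPC pair, using the NQNs shown in the log and the rpc.py helper used elsewhere in this run:)

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # revoking the host tears down its active TCP qpair -> the aborted WRITE/READ completions above
    $RPC nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    sleep 1
    # re-adding it lets bdevperf's reset path reconnect ("Resetting controller successful")
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0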
00:14:31.087 11:25:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3476484 00:14:31.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3476484) - No such process 00:14:31.087 11:25:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:14:31.087 11:25:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:31.087 11:25:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:31.087 11:25:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:31.087 11:25:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:31.087 11:25:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:31.087 11:25:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:31.087 11:25:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:31.087 { 00:14:31.087 "params": { 00:14:31.087 "name": "Nvme$subsystem", 00:14:31.087 "trtype": "$TEST_TRANSPORT", 00:14:31.087 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:31.087 "adrfam": "ipv4", 00:14:31.087 "trsvcid": "$NVMF_PORT", 00:14:31.087 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:31.087 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:31.087 "hdgst": ${hdgst:-false}, 00:14:31.087 "ddgst": ${ddgst:-false} 00:14:31.087 }, 00:14:31.087 "method": "bdev_nvme_attach_controller" 00:14:31.087 } 00:14:31.087 EOF 00:14:31.087 )") 00:14:31.087 11:25:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:31.087 11:25:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:31.087 11:25:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:31.087 11:25:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:31.087 "params": { 00:14:31.087 "name": "Nvme0", 00:14:31.087 "trtype": "tcp", 00:14:31.087 "traddr": "10.0.0.2", 00:14:31.087 "adrfam": "ipv4", 00:14:31.087 "trsvcid": "4420", 00:14:31.087 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:31.087 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:31.087 "hdgst": false, 00:14:31.087 "ddgst": false 00:14:31.087 }, 00:14:31.087 "method": "bdev_nvme_attach_controller" 00:14:31.087 }' 00:14:31.087 [2024-07-15 11:25:59.739741] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:14:31.087 [2024-07-15 11:25:59.739798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3476840 ] 00:14:31.087 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.347 [2024-07-15 11:25:59.798303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.347 [2024-07-15 11:25:59.860734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.347 Running I/O for 1 seconds... 
00:14:32.727 00:14:32.727 Latency(us) 00:14:32.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.727 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:32.727 Verification LBA range: start 0x0 length 0x400 00:14:32.727 Nvme0n1 : 1.04 1233.11 77.07 0.00 0.00 51096.69 8028.16 45219.84 00:14:32.727 =================================================================================================================== 00:14:32.727 Total : 1233.11 77.07 0.00 0.00 51096.69 8028.16 45219.84 00:14:32.727 11:26:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:14:32.727 11:26:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:32.727 11:26:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:32.727 11:26:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:32.727 11:26:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:14:32.727 11:26:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:32.727 11:26:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:14:32.727 11:26:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:32.727 11:26:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:14:32.727 11:26:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:32.727 11:26:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:32.727 rmmod nvme_tcp 00:14:32.727 rmmod nvme_fabrics 00:14:32.727 rmmod nvme_keyring 00:14:32.727 11:26:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:32.727 11:26:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:14:32.727 11:26:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:14:32.727 11:26:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3476118 ']' 00:14:32.727 11:26:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3476118 00:14:32.727 11:26:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 3476118 ']' 00:14:32.727 11:26:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 3476118 00:14:32.727 11:26:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:14:32.727 11:26:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:32.727 11:26:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3476118 00:14:32.727 11:26:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:32.727 11:26:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:32.727 11:26:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3476118' 00:14:32.727 killing process with pid 3476118 00:14:32.727 11:26:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 3476118 00:14:32.727 11:26:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 3476118 00:14:33.010 [2024-07-15 11:26:01.428931] app.c: 
711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:33.010 11:26:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:33.010 11:26:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:33.010 11:26:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:33.010 11:26:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:33.010 11:26:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:33.010 11:26:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.010 11:26:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.010 11:26:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.920 11:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:34.920 11:26:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:34.920 00:14:34.920 real 0m13.997s 00:14:34.920 user 0m22.262s 00:14:34.920 sys 0m6.161s 00:14:34.920 11:26:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:34.920 11:26:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:34.920 ************************************ 00:14:34.920 END TEST nvmf_host_management 00:14:34.920 ************************************ 00:14:34.920 11:26:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:34.920 11:26:03 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:34.920 11:26:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:34.920 11:26:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:34.920 11:26:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:34.920 ************************************ 00:14:34.920 START TEST nvmf_lvol 00:14:34.920 ************************************ 00:14:34.920 11:26:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:35.181 * Looking for test storage... 
00:14:35.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.181 11:26:03 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:14:35.181 11:26:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:43.326 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:43.326 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:14:43.326 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:43.326 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:43.326 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:43.326 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:43.326 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:43.326 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:14:43.326 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:43.326 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:14:43.326 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:14:43.326 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:14:43.326 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:43.327 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:43.327 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:43.327 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:43.327 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:43.327 
11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:43.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:43.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:14:43.327 00:14:43.327 --- 10.0.0.2 ping statistics --- 00:14:43.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.327 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:43.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:43.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:14:43.327 00:14:43.327 --- 10.0.0.1 ping statistics --- 00:14:43.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.327 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:43.327 11:26:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:43.328 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3481183 00:14:43.328 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3481183 00:14:43.328 11:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:43.328 11:26:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 3481183 ']' 00:14:43.328 11:26:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.328 11:26:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:43.328 11:26:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.328 11:26:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:43.328 11:26:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:43.328 [2024-07-15 11:26:10.937795] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:14:43.328 [2024-07-15 11:26:10.937848] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.328 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.328 [2024-07-15 11:26:11.003434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:43.328 [2024-07-15 11:26:11.067553] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.328 [2024-07-15 11:26:11.067592] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:43.328 [2024-07-15 11:26:11.067600] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.328 [2024-07-15 11:26:11.067607] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.328 [2024-07-15 11:26:11.067612] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:43.328 [2024-07-15 11:26:11.067748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.328 [2024-07-15 11:26:11.067871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:43.328 [2024-07-15 11:26:11.067874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.328 11:26:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:43.328 11:26:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:14:43.328 11:26:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:43.328 11:26:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:43.328 11:26:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:43.328 11:26:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.328 11:26:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:43.328 [2024-07-15 11:26:11.891964] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.328 11:26:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:43.589 11:26:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:43.589 11:26:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:43.589 11:26:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:43.849 11:26:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:43.849 11:26:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:44.109 11:26:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=28a1ef88-b77a-48cf-bac3-03565df0bca4 00:14:44.109 11:26:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 28a1ef88-b77a-48cf-bac3-03565df0bca4 lvol 20 00:14:44.109 11:26:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=742116f7-ceac-4b69-be87-29da9c93b230 00:14:44.109 11:26:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:44.370 11:26:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 742116f7-ceac-4b69-be87-29da9c93b230 00:14:44.633 11:26:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:14:44.633 [2024-07-15 11:26:13.270309] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:44.633 11:26:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:44.937 11:26:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3481856 00:14:44.937 11:26:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:44.937 11:26:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:44.937 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.879 11:26:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 742116f7-ceac-4b69-be87-29da9c93b230 MY_SNAPSHOT 00:14:46.140 11:26:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e652228b-0f13-4b8e-bb4a-9683d3d01387 00:14:46.140 11:26:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 742116f7-ceac-4b69-be87-29da9c93b230 30 00:14:46.140 11:26:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e652228b-0f13-4b8e-bb4a-9683d3d01387 MY_CLONE 00:14:46.401 11:26:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e1501e03-8622-4599-b81a-081c29e32d2d 00:14:46.401 11:26:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate e1501e03-8622-4599-b81a-081c29e32d2d 00:14:46.662 11:26:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3481856 00:14:56.682 Initializing NVMe Controllers 00:14:56.682 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:56.682 Controller IO queue size 128, less than required. 00:14:56.682 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:56.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:56.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:56.682 Initialization complete. Launching workers. 
00:14:56.682 ======================================================== 00:14:56.682 Latency(us) 00:14:56.682 Device Information : IOPS MiB/s Average min max 00:14:56.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12554.50 49.04 10198.88 1450.57 50822.39 00:14:56.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 18281.40 71.41 7002.75 774.55 50984.39 00:14:56.682 ======================================================== 00:14:56.682 Total : 30835.89 120.45 8304.02 774.55 50984.39 00:14:56.682 00:14:56.682 11:26:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:56.682 11:26:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 742116f7-ceac-4b69-be87-29da9c93b230 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 28a1ef88-b77a-48cf-bac3-03565df0bca4 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:56.682 rmmod nvme_tcp 00:14:56.682 rmmod nvme_fabrics 00:14:56.682 rmmod nvme_keyring 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3481183 ']' 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3481183 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 3481183 ']' 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 3481183 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3481183 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3481183' 00:14:56.682 killing process with pid 3481183 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 3481183 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 3481183 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:56.682 
11:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.682 11:26:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.071 11:26:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:58.071 00:14:58.071 real 0m23.008s 00:14:58.071 user 1m3.305s 00:14:58.071 sys 0m7.737s 00:14:58.071 11:26:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:58.071 11:26:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:58.071 ************************************ 00:14:58.071 END TEST nvmf_lvol 00:14:58.071 ************************************ 00:14:58.071 11:26:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:58.071 11:26:26 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:58.071 11:26:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:58.071 11:26:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:58.071 11:26:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:58.071 ************************************ 00:14:58.071 START TEST nvmf_lvs_grow 00:14:58.071 ************************************ 00:14:58.071 11:26:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:58.332 * Looking for test storage... 
00:14:58.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:58.332 11:26:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:58.332 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:58.332 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:58.332 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:58.332 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:58.332 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:58.332 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:58.332 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:58.332 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:58.332 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:58.332 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:58.332 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:58.332 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:58.332 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:58.332 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:58.332 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:58.332 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:58.332 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:58.332 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:58.332 11:26:26 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:58.332 11:26:26 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.332 11:26:26 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.332 11:26:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.332 11:26:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.332 11:26:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.332 11:26:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:58.332 11:26:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.332 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:58.332 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:58.333 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:58.333 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:58.333 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:58.333 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:58.333 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:58.333 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:58.333 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:58.333 11:26:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:58.333 11:26:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:58.333 11:26:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:58.333 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:58.333 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:58.333 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:58.333 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:58.333 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:58.333 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.333 11:26:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:58.333 11:26:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.333 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:58.333 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:58.333 11:26:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:58.333 11:26:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:04.920 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:04.920 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:04.920 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:04.920 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:04.920 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:05.180 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:05.180 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:05.180 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:05.180 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:05.180 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:05.180 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:05.180 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:05.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:05.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms 00:15:05.180 00:15:05.180 --- 10.0.0.2 ping statistics --- 00:15:05.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.180 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms 00:15:05.180 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:05.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:05.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:15:05.180 00:15:05.180 --- 10.0.0.1 ping statistics --- 00:15:05.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.180 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:15:05.180 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:05.180 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:15:05.180 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:05.180 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:05.180 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:05.180 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:05.180 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:05.180 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:05.180 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:05.441 11:26:33 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:15:05.441 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:05.441 11:26:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:05.441 11:26:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:05.441 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3488045 00:15:05.441 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3488045 00:15:05.441 11:26:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:05.441 11:26:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 3488045 ']' 00:15:05.441 11:26:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.441 11:26:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:05.441 11:26:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.441 11:26:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:05.441 11:26:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:05.441 [2024-07-15 11:26:33.947062] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:15:05.441 [2024-07-15 11:26:33.947138] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.441 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.441 [2024-07-15 11:26:34.019645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.441 [2024-07-15 11:26:34.093116] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:05.441 [2024-07-15 11:26:34.093162] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:05.441 [2024-07-15 11:26:34.093170] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:05.441 [2024-07-15 11:26:34.093177] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:05.441 [2024-07-15 11:26:34.093186] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:05.441 [2024-07-15 11:26:34.093213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.383 11:26:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:06.383 11:26:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:15:06.383 11:26:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:06.383 11:26:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:06.383 11:26:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:06.383 11:26:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.383 11:26:34 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:06.383 [2024-07-15 11:26:34.905169] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:06.383 11:26:34 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:15:06.383 11:26:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:06.383 11:26:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:06.383 11:26:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:06.383 ************************************ 00:15:06.383 START TEST lvs_grow_clean 00:15:06.383 ************************************ 00:15:06.383 11:26:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:15:06.383 11:26:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:06.383 11:26:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:06.383 11:26:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:06.383 11:26:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:06.383 11:26:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:06.383 11:26:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:06.383 11:26:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:06.383 11:26:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:06.383 11:26:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:06.643 11:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:15:06.643 11:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:06.903 11:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e2761196-f99e-4fc3-84ef-970acafcd5ae 00:15:06.903 11:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2761196-f99e-4fc3-84ef-970acafcd5ae 00:15:06.903 11:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:06.903 11:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:06.903 11:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:06.903 11:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e2761196-f99e-4fc3-84ef-970acafcd5ae lvol 150 00:15:07.163 11:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7d350140-39ad-4a8e-b9f9-5dde06dfbc99 00:15:07.163 11:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:07.163 11:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:07.163 [2024-07-15 11:26:35.805217] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:07.163 [2024-07-15 11:26:35.805271] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:07.163 true 00:15:07.163 11:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2761196-f99e-4fc3-84ef-970acafcd5ae 00:15:07.163 11:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:07.422 11:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:07.422 11:26:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:07.681 11:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7d350140-39ad-4a8e-b9f9-5dde06dfbc99 00:15:07.681 11:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:07.942 [2024-07-15 11:26:36.427104] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:07.942 11:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:07.942 11:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3488605 00:15:07.943 11:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:07.943 11:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3488605 /var/tmp/bdevperf.sock 00:15:07.943 11:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 3488605 ']' 00:15:07.943 11:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:07.943 11:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:07.943 11:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:07.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:07.943 11:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:07.943 11:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:07.943 11:26:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:08.203 [2024-07-15 11:26:36.651071] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:15:08.203 [2024-07-15 11:26:36.651126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3488605 ] 00:15:08.203 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.203 [2024-07-15 11:26:36.725084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.203 [2024-07-15 11:26:36.789168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.773 11:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:08.773 11:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:15:08.773 11:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:09.033 Nvme0n1 00:15:09.033 11:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:09.293 [ 00:15:09.293 { 00:15:09.293 "name": "Nvme0n1", 00:15:09.293 "aliases": [ 00:15:09.293 "7d350140-39ad-4a8e-b9f9-5dde06dfbc99" 00:15:09.293 ], 00:15:09.293 "product_name": "NVMe disk", 00:15:09.293 "block_size": 4096, 00:15:09.293 "num_blocks": 38912, 00:15:09.293 "uuid": "7d350140-39ad-4a8e-b9f9-5dde06dfbc99", 00:15:09.293 "assigned_rate_limits": { 00:15:09.293 "rw_ios_per_sec": 0, 00:15:09.293 "rw_mbytes_per_sec": 0, 00:15:09.293 "r_mbytes_per_sec": 0, 00:15:09.293 "w_mbytes_per_sec": 0 00:15:09.293 }, 00:15:09.293 "claimed": false, 00:15:09.293 "zoned": false, 00:15:09.293 "supported_io_types": { 00:15:09.293 "read": true, 00:15:09.293 "write": true, 00:15:09.293 "unmap": true, 00:15:09.293 "flush": true, 00:15:09.293 "reset": true, 00:15:09.293 "nvme_admin": true, 00:15:09.293 "nvme_io": true, 00:15:09.293 "nvme_io_md": false, 00:15:09.293 "write_zeroes": true, 00:15:09.293 "zcopy": false, 00:15:09.293 "get_zone_info": false, 00:15:09.293 "zone_management": false, 00:15:09.293 "zone_append": false, 00:15:09.293 "compare": true, 00:15:09.293 "compare_and_write": true, 00:15:09.293 "abort": true, 00:15:09.293 "seek_hole": false, 00:15:09.293 "seek_data": false, 00:15:09.293 "copy": true, 00:15:09.293 "nvme_iov_md": false 00:15:09.293 }, 00:15:09.293 "memory_domains": [ 00:15:09.293 { 00:15:09.293 "dma_device_id": "system", 00:15:09.293 "dma_device_type": 1 00:15:09.293 } 00:15:09.293 ], 00:15:09.293 "driver_specific": { 00:15:09.293 "nvme": [ 00:15:09.293 { 00:15:09.293 "trid": { 00:15:09.293 "trtype": "TCP", 00:15:09.293 "adrfam": "IPv4", 00:15:09.293 "traddr": "10.0.0.2", 00:15:09.293 "trsvcid": "4420", 00:15:09.293 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:09.293 }, 00:15:09.293 "ctrlr_data": { 00:15:09.293 "cntlid": 1, 00:15:09.293 "vendor_id": "0x8086", 00:15:09.293 "model_number": "SPDK bdev Controller", 00:15:09.293 "serial_number": "SPDK0", 00:15:09.293 "firmware_revision": "24.09", 00:15:09.293 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:09.293 "oacs": { 00:15:09.293 "security": 0, 00:15:09.293 "format": 0, 00:15:09.293 "firmware": 0, 00:15:09.293 "ns_manage": 0 00:15:09.293 }, 00:15:09.293 "multi_ctrlr": true, 00:15:09.293 "ana_reporting": false 00:15:09.293 }, 
00:15:09.293 "vs": { 00:15:09.293 "nvme_version": "1.3" 00:15:09.293 }, 00:15:09.293 "ns_data": { 00:15:09.293 "id": 1, 00:15:09.293 "can_share": true 00:15:09.293 } 00:15:09.293 } 00:15:09.293 ], 00:15:09.293 "mp_policy": "active_passive" 00:15:09.293 } 00:15:09.293 } 00:15:09.293 ] 00:15:09.293 11:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3488935 00:15:09.293 11:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:09.294 11:26:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:09.294 Running I/O for 10 seconds... 00:15:10.678 Latency(us) 00:15:10.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.678 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:10.678 Nvme0n1 : 1.00 17564.00 68.61 0.00 0.00 0.00 0.00 0.00 00:15:10.678 =================================================================================================================== 00:15:10.678 Total : 17564.00 68.61 0.00 0.00 0.00 0.00 0.00 00:15:10.678 00:15:11.250 11:26:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e2761196-f99e-4fc3-84ef-970acafcd5ae 00:15:11.510 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:11.511 Nvme0n1 : 2.00 17642.00 68.91 0.00 0.00 0.00 0.00 0.00 00:15:11.511 =================================================================================================================== 00:15:11.511 Total : 17642.00 68.91 0.00 0.00 0.00 0.00 0.00 00:15:11.511 00:15:11.511 true 00:15:11.511 11:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2761196-f99e-4fc3-84ef-970acafcd5ae 00:15:11.511 11:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:11.511 11:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:11.511 11:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:11.511 11:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3488935 00:15:12.453 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:12.453 Nvme0n1 : 3.00 17662.67 68.99 0.00 0.00 0.00 0.00 0.00 00:15:12.453 =================================================================================================================== 00:15:12.453 Total : 17662.67 68.99 0.00 0.00 0.00 0.00 0.00 00:15:12.453 00:15:13.413 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:13.413 Nvme0n1 : 4.00 17697.00 69.13 0.00 0.00 0.00 0.00 0.00 00:15:13.413 =================================================================================================================== 00:15:13.413 Total : 17697.00 69.13 0.00 0.00 0.00 0.00 0.00 00:15:13.413 00:15:14.391 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:14.391 Nvme0n1 : 5.00 17722.40 69.23 0.00 0.00 0.00 0.00 0.00 00:15:14.391 =================================================================================================================== 00:15:14.391 
Total : 17722.40 69.23 0.00 0.00 0.00 0.00 0.00 00:15:14.391 00:15:15.331 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:15.331 Nvme0n1 : 6.00 17744.67 69.32 0.00 0.00 0.00 0.00 0.00 00:15:15.331 =================================================================================================================== 00:15:15.331 Total : 17744.67 69.32 0.00 0.00 0.00 0.00 0.00 00:15:15.331 00:15:16.273 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:16.273 Nvme0n1 : 7.00 17761.71 69.38 0.00 0.00 0.00 0.00 0.00 00:15:16.273 =================================================================================================================== 00:15:16.273 Total : 17761.71 69.38 0.00 0.00 0.00 0.00 0.00 00:15:16.273 00:15:17.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:17.660 Nvme0n1 : 8.00 17777.50 69.44 0.00 0.00 0.00 0.00 0.00 00:15:17.660 =================================================================================================================== 00:15:17.660 Total : 17777.50 69.44 0.00 0.00 0.00 0.00 0.00 00:15:17.660 00:15:18.602 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:18.602 Nvme0n1 : 9.00 17788.89 69.49 0.00 0.00 0.00 0.00 0.00 00:15:18.602 =================================================================================================================== 00:15:18.602 Total : 17788.89 69.49 0.00 0.00 0.00 0.00 0.00 00:15:18.602 00:15:19.545 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:19.545 Nvme0n1 : 10.00 17799.60 69.53 0.00 0.00 0.00 0.00 0.00 00:15:19.545 =================================================================================================================== 00:15:19.545 Total : 17799.60 69.53 0.00 0.00 0.00 0.00 0.00 00:15:19.545 00:15:19.545 00:15:19.545 Latency(us) 00:15:19.545 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.545 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:19.545 Nvme0n1 : 10.01 17800.16 69.53 0.00 0.00 7185.77 3686.40 11195.73 00:15:19.545 =================================================================================================================== 00:15:19.545 Total : 17800.16 69.53 0.00 0.00 7185.77 3686.40 11195.73 00:15:19.545 0 00:15:19.545 11:26:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3488605 00:15:19.545 11:26:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 3488605 ']' 00:15:19.545 11:26:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 3488605 00:15:19.545 11:26:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:15:19.545 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:19.545 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3488605 00:15:19.545 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:19.545 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:19.545 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3488605' 00:15:19.545 killing process with pid 3488605 00:15:19.545 11:26:48 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 3488605 00:15:19.545 Received shutdown signal, test time was about 10.000000 seconds 00:15:19.545 00:15:19.545 Latency(us) 00:15:19.545 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.545 =================================================================================================================== 00:15:19.545 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:19.545 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 3488605 00:15:19.545 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:19.806 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:19.806 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2761196-f99e-4fc3-84ef-970acafcd5ae 00:15:19.806 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:20.067 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:20.067 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:15:20.067 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:20.067 [2024-07-15 11:26:48.752500] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:20.327 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2761196-f99e-4fc3-84ef-970acafcd5ae 00:15:20.327 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:15:20.327 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2761196-f99e-4fc3-84ef-970acafcd5ae 00:15:20.327 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:20.327 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:20.327 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:20.327 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:20.327 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:20.327 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:20.327 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:20.327 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:20.327 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2761196-f99e-4fc3-84ef-970acafcd5ae 00:15:20.327 request: 00:15:20.327 { 00:15:20.327 "uuid": "e2761196-f99e-4fc3-84ef-970acafcd5ae", 00:15:20.327 "method": "bdev_lvol_get_lvstores", 00:15:20.327 "req_id": 1 00:15:20.327 } 00:15:20.327 Got JSON-RPC error response 00:15:20.327 response: 00:15:20.327 { 00:15:20.327 "code": -19, 00:15:20.327 "message": "No such device" 00:15:20.327 } 00:15:20.327 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:15:20.327 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:20.327 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:20.327 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:20.327 11:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:20.589 aio_bdev 00:15:20.589 11:26:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7d350140-39ad-4a8e-b9f9-5dde06dfbc99 00:15:20.589 11:26:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=7d350140-39ad-4a8e-b9f9-5dde06dfbc99 00:15:20.589 11:26:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:20.589 11:26:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:15:20.589 11:26:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:20.589 11:26:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:20.589 11:26:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:20.589 11:26:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7d350140-39ad-4a8e-b9f9-5dde06dfbc99 -t 2000 00:15:20.850 [ 00:15:20.850 { 00:15:20.850 "name": "7d350140-39ad-4a8e-b9f9-5dde06dfbc99", 00:15:20.850 "aliases": [ 00:15:20.850 "lvs/lvol" 00:15:20.850 ], 00:15:20.850 "product_name": "Logical Volume", 00:15:20.850 "block_size": 4096, 00:15:20.850 "num_blocks": 38912, 00:15:20.850 "uuid": "7d350140-39ad-4a8e-b9f9-5dde06dfbc99", 00:15:20.850 "assigned_rate_limits": { 00:15:20.850 "rw_ios_per_sec": 0, 00:15:20.850 "rw_mbytes_per_sec": 0, 00:15:20.850 "r_mbytes_per_sec": 0, 00:15:20.850 "w_mbytes_per_sec": 0 00:15:20.850 }, 00:15:20.850 "claimed": false, 00:15:20.850 "zoned": false, 00:15:20.850 "supported_io_types": { 00:15:20.850 "read": true, 00:15:20.850 "write": true, 00:15:20.850 "unmap": true, 00:15:20.850 "flush": false, 00:15:20.850 "reset": true, 00:15:20.850 "nvme_admin": false, 00:15:20.850 "nvme_io": false, 00:15:20.850 
"nvme_io_md": false, 00:15:20.850 "write_zeroes": true, 00:15:20.850 "zcopy": false, 00:15:20.850 "get_zone_info": false, 00:15:20.850 "zone_management": false, 00:15:20.850 "zone_append": false, 00:15:20.850 "compare": false, 00:15:20.850 "compare_and_write": false, 00:15:20.850 "abort": false, 00:15:20.850 "seek_hole": true, 00:15:20.850 "seek_data": true, 00:15:20.850 "copy": false, 00:15:20.850 "nvme_iov_md": false 00:15:20.850 }, 00:15:20.850 "driver_specific": { 00:15:20.850 "lvol": { 00:15:20.850 "lvol_store_uuid": "e2761196-f99e-4fc3-84ef-970acafcd5ae", 00:15:20.850 "base_bdev": "aio_bdev", 00:15:20.850 "thin_provision": false, 00:15:20.850 "num_allocated_clusters": 38, 00:15:20.850 "snapshot": false, 00:15:20.850 "clone": false, 00:15:20.850 "esnap_clone": false 00:15:20.850 } 00:15:20.850 } 00:15:20.850 } 00:15:20.850 ] 00:15:20.850 11:26:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:15:20.850 11:26:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2761196-f99e-4fc3-84ef-970acafcd5ae 00:15:20.850 11:26:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:21.111 11:26:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:21.111 11:26:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2761196-f99e-4fc3-84ef-970acafcd5ae 00:15:21.111 11:26:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:21.111 11:26:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:21.111 11:26:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7d350140-39ad-4a8e-b9f9-5dde06dfbc99 00:15:21.372 11:26:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e2761196-f99e-4fc3-84ef-970acafcd5ae 00:15:21.372 11:26:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:21.633 11:26:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:21.633 00:15:21.633 real 0m15.236s 00:15:21.633 user 0m14.901s 00:15:21.633 sys 0m1.350s 00:15:21.633 11:26:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:21.633 11:26:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:21.633 ************************************ 00:15:21.633 END TEST lvs_grow_clean 00:15:21.633 ************************************ 00:15:21.633 11:26:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:21.633 11:26:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:21.633 11:26:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:21.633 11:26:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:15:21.633 11:26:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:21.633 ************************************ 00:15:21.633 START TEST lvs_grow_dirty 00:15:21.633 ************************************ 00:15:21.633 11:26:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:15:21.633 11:26:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:21.633 11:26:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:21.633 11:26:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:21.633 11:26:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:21.633 11:26:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:21.633 11:26:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:21.633 11:26:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:21.633 11:26:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:21.633 11:26:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:21.894 11:26:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:21.894 11:26:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:22.155 11:26:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=156c8d2a-0dcf-4c60-8785-5dd5017388bd 00:15:22.155 11:26:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 156c8d2a-0dcf-4c60-8785-5dd5017388bd 00:15:22.155 11:26:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:22.155 11:26:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:22.155 11:26:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:22.155 11:26:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 156c8d2a-0dcf-4c60-8785-5dd5017388bd lvol 150 00:15:22.415 11:26:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=2952a57c-5d98-4267-b201-203d89d23a0b 00:15:22.415 11:26:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:22.415 11:26:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:22.415 
[2024-07-15 11:26:51.107731] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:22.415 [2024-07-15 11:26:51.107785] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:22.415 true 00:15:22.676 11:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 156c8d2a-0dcf-4c60-8785-5dd5017388bd 00:15:22.676 11:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:22.676 11:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:22.676 11:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:22.937 11:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2952a57c-5d98-4267-b201-203d89d23a0b 00:15:22.937 11:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:23.198 [2024-07-15 11:26:51.749685] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:23.198 11:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:23.459 11:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3491683 00:15:23.459 11:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:23.459 11:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3491683 /var/tmp/bdevperf.sock 00:15:23.459 11:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:23.459 11:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3491683 ']' 00:15:23.459 11:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:23.459 11:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:23.459 11:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:23.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:23.459 11:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:23.459 11:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:23.459 [2024-07-15 11:26:51.964849] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:15:23.459 [2024-07-15 11:26:51.964897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3491683 ] 00:15:23.459 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.459 [2024-07-15 11:26:52.038918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.459 [2024-07-15 11:26:52.093057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:24.030 11:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:24.030 11:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:24.030 11:26:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:24.602 Nvme0n1 00:15:24.602 11:26:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:24.602 [ 00:15:24.602 { 00:15:24.602 "name": "Nvme0n1", 00:15:24.602 "aliases": [ 00:15:24.602 "2952a57c-5d98-4267-b201-203d89d23a0b" 00:15:24.602 ], 00:15:24.602 "product_name": "NVMe disk", 00:15:24.602 "block_size": 4096, 00:15:24.602 "num_blocks": 38912, 00:15:24.602 "uuid": "2952a57c-5d98-4267-b201-203d89d23a0b", 00:15:24.602 "assigned_rate_limits": { 00:15:24.602 "rw_ios_per_sec": 0, 00:15:24.602 "rw_mbytes_per_sec": 0, 00:15:24.602 "r_mbytes_per_sec": 0, 00:15:24.602 "w_mbytes_per_sec": 0 00:15:24.602 }, 00:15:24.602 "claimed": false, 00:15:24.602 "zoned": false, 00:15:24.602 "supported_io_types": { 00:15:24.602 "read": true, 00:15:24.602 "write": true, 00:15:24.602 "unmap": true, 00:15:24.602 "flush": true, 00:15:24.602 "reset": true, 00:15:24.602 "nvme_admin": true, 00:15:24.602 "nvme_io": true, 00:15:24.602 "nvme_io_md": false, 00:15:24.602 "write_zeroes": true, 00:15:24.602 "zcopy": false, 00:15:24.602 "get_zone_info": false, 00:15:24.602 "zone_management": false, 00:15:24.602 "zone_append": false, 00:15:24.602 "compare": true, 00:15:24.602 "compare_and_write": true, 00:15:24.602 "abort": true, 00:15:24.602 "seek_hole": false, 00:15:24.602 "seek_data": false, 00:15:24.602 "copy": true, 00:15:24.602 "nvme_iov_md": false 00:15:24.602 }, 00:15:24.602 "memory_domains": [ 00:15:24.602 { 00:15:24.602 "dma_device_id": "system", 00:15:24.602 "dma_device_type": 1 00:15:24.602 } 00:15:24.602 ], 00:15:24.602 "driver_specific": { 00:15:24.602 "nvme": [ 00:15:24.602 { 00:15:24.602 "trid": { 00:15:24.602 "trtype": "TCP", 00:15:24.602 "adrfam": "IPv4", 00:15:24.602 "traddr": "10.0.0.2", 00:15:24.602 "trsvcid": "4420", 00:15:24.602 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:24.602 }, 00:15:24.602 "ctrlr_data": { 00:15:24.602 "cntlid": 1, 00:15:24.602 "vendor_id": "0x8086", 00:15:24.602 "model_number": "SPDK bdev Controller", 00:15:24.602 "serial_number": "SPDK0", 
00:15:24.602 "firmware_revision": "24.09", 00:15:24.602 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:24.602 "oacs": { 00:15:24.602 "security": 0, 00:15:24.602 "format": 0, 00:15:24.602 "firmware": 0, 00:15:24.602 "ns_manage": 0 00:15:24.602 }, 00:15:24.602 "multi_ctrlr": true, 00:15:24.602 "ana_reporting": false 00:15:24.602 }, 00:15:24.602 "vs": { 00:15:24.602 "nvme_version": "1.3" 00:15:24.602 }, 00:15:24.602 "ns_data": { 00:15:24.602 "id": 1, 00:15:24.602 "can_share": true 00:15:24.602 } 00:15:24.602 } 00:15:24.602 ], 00:15:24.602 "mp_policy": "active_passive" 00:15:24.602 } 00:15:24.602 } 00:15:24.602 ] 00:15:24.602 11:26:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3492007 00:15:24.602 11:26:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:24.602 11:26:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:24.602 Running I/O for 10 seconds... 00:15:25.988 Latency(us) 00:15:25.988 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.988 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:25.988 Nvme0n1 : 1.00 18113.00 70.75 0.00 0.00 0.00 0.00 0.00 00:15:25.988 =================================================================================================================== 00:15:25.988 Total : 18113.00 70.75 0.00 0.00 0.00 0.00 0.00 00:15:25.988 00:15:26.559 11:26:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 156c8d2a-0dcf-4c60-8785-5dd5017388bd 00:15:26.820 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:26.820 Nvme0n1 : 2.00 18149.50 70.90 0.00 0.00 0.00 0.00 0.00 00:15:26.820 =================================================================================================================== 00:15:26.820 Total : 18149.50 70.90 0.00 0.00 0.00 0.00 0.00 00:15:26.820 00:15:26.820 true 00:15:26.820 11:26:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 156c8d2a-0dcf-4c60-8785-5dd5017388bd 00:15:26.820 11:26:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:27.081 11:26:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:27.081 11:26:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:27.081 11:26:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3492007 00:15:27.652 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:27.652 Nvme0n1 : 3.00 18209.00 71.13 0.00 0.00 0.00 0.00 0.00 00:15:27.652 =================================================================================================================== 00:15:27.652 Total : 18209.00 71.13 0.00 0.00 0.00 0.00 0.00 00:15:27.652 00:15:28.592 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:28.592 Nvme0n1 : 4.00 18242.50 71.26 0.00 0.00 0.00 0.00 0.00 00:15:28.592 =================================================================================================================== 00:15:28.592 Total : 18242.50 71.26 0.00 
0.00 0.00 0.00 0.00 00:15:28.592 00:15:29.973 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:29.973 Nvme0n1 : 5.00 18263.60 71.34 0.00 0.00 0.00 0.00 0.00 00:15:29.973 =================================================================================================================== 00:15:29.973 Total : 18263.60 71.34 0.00 0.00 0.00 0.00 0.00 00:15:29.973 00:15:30.911 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:30.911 Nvme0n1 : 6.00 18282.00 71.41 0.00 0.00 0.00 0.00 0.00 00:15:30.911 =================================================================================================================== 00:15:30.911 Total : 18282.00 71.41 0.00 0.00 0.00 0.00 0.00 00:15:30.911 00:15:31.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:31.909 Nvme0n1 : 7.00 18294.43 71.46 0.00 0.00 0.00 0.00 0.00 00:15:31.909 =================================================================================================================== 00:15:31.909 Total : 18294.43 71.46 0.00 0.00 0.00 0.00 0.00 00:15:31.909 00:15:32.866 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:32.866 Nvme0n1 : 8.00 18312.00 71.53 0.00 0.00 0.00 0.00 0.00 00:15:32.866 =================================================================================================================== 00:15:32.866 Total : 18312.00 71.53 0.00 0.00 0.00 0.00 0.00 00:15:32.866 00:15:33.809 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:33.809 Nvme0n1 : 9.00 18312.11 71.53 0.00 0.00 0.00 0.00 0.00 00:15:33.809 =================================================================================================================== 00:15:33.809 Total : 18312.11 71.53 0.00 0.00 0.00 0.00 0.00 00:15:33.809 00:15:34.749 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:34.749 Nvme0n1 : 10.00 18331.20 71.61 0.00 0.00 0.00 0.00 0.00 00:15:34.749 =================================================================================================================== 00:15:34.749 Total : 18331.20 71.61 0.00 0.00 0.00 0.00 0.00 00:15:34.749 00:15:34.749 00:15:34.749 Latency(us) 00:15:34.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:34.749 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:34.749 Nvme0n1 : 10.01 18332.24 71.61 0.00 0.00 6979.61 2880.85 12670.29 00:15:34.749 =================================================================================================================== 00:15:34.749 Total : 18332.24 71.61 0.00 0.00 6979.61 2880.85 12670.29 00:15:34.749 0 00:15:34.749 11:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3491683 00:15:34.749 11:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 3491683 ']' 00:15:34.749 11:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 3491683 00:15:34.749 11:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:15:34.749 11:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:34.749 11:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3491683 00:15:34.749 11:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:34.749 11:27:03 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:34.749 11:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3491683' 00:15:34.749 killing process with pid 3491683 00:15:34.749 11:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 3491683 00:15:34.749 Received shutdown signal, test time was about 10.000000 seconds 00:15:34.749 00:15:34.749 Latency(us) 00:15:34.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:34.750 =================================================================================================================== 00:15:34.750 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:34.750 11:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 3491683 00:15:35.009 11:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:35.009 11:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:35.269 11:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 156c8d2a-0dcf-4c60-8785-5dd5017388bd 00:15:35.269 11:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:35.530 11:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:35.530 11:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:15:35.530 11:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3488045 00:15:35.530 11:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3488045 00:15:35.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3488045 Killed "${NVMF_APP[@]}" "$@" 00:15:35.530 11:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:15:35.530 11:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:15:35.530 11:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:35.530 11:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:35.530 11:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:35.530 11:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3494149 00:15:35.530 11:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3494149 00:15:35.530 11:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3494149 ']' 00:15:35.530 11:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.530 11:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:35.530 11:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:15:35.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.530 11:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:35.530 11:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:35.530 11:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:35.530 [2024-07-15 11:27:04.061664] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:15:35.530 [2024-07-15 11:27:04.061717] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.530 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.530 [2024-07-15 11:27:04.129390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.530 [2024-07-15 11:27:04.193689] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.530 [2024-07-15 11:27:04.193724] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:35.530 [2024-07-15 11:27:04.193731] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:35.530 [2024-07-15 11:27:04.193738] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:35.530 [2024-07-15 11:27:04.193743] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:35.530 [2024-07-15 11:27:04.193762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.473 11:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:36.473 11:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:36.473 11:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:36.473 11:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:36.473 11:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:36.473 11:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.473 11:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:36.473 [2024-07-15 11:27:04.990584] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:36.473 [2024-07-15 11:27:04.990673] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:36.473 [2024-07-15 11:27:04.990702] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:36.473 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:36.473 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 2952a57c-5d98-4267-b201-203d89d23a0b 00:15:36.473 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=2952a57c-5d98-4267-b201-203d89d23a0b 00:15:36.473 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:36.473 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:36.473 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:36.473 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:36.473 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:36.473 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2952a57c-5d98-4267-b201-203d89d23a0b -t 2000 00:15:36.733 [ 00:15:36.733 { 00:15:36.733 "name": "2952a57c-5d98-4267-b201-203d89d23a0b", 00:15:36.733 "aliases": [ 00:15:36.733 "lvs/lvol" 00:15:36.733 ], 00:15:36.733 "product_name": "Logical Volume", 00:15:36.733 "block_size": 4096, 00:15:36.733 "num_blocks": 38912, 00:15:36.733 "uuid": "2952a57c-5d98-4267-b201-203d89d23a0b", 00:15:36.733 "assigned_rate_limits": { 00:15:36.733 "rw_ios_per_sec": 0, 00:15:36.733 "rw_mbytes_per_sec": 0, 00:15:36.733 "r_mbytes_per_sec": 0, 00:15:36.733 "w_mbytes_per_sec": 0 00:15:36.733 }, 00:15:36.733 "claimed": false, 00:15:36.733 "zoned": false, 00:15:36.733 "supported_io_types": { 00:15:36.733 "read": true, 00:15:36.733 "write": true, 00:15:36.733 "unmap": true, 00:15:36.733 "flush": false, 00:15:36.733 "reset": true, 00:15:36.733 "nvme_admin": false, 00:15:36.733 "nvme_io": false, 00:15:36.733 "nvme_io_md": 
false, 00:15:36.733 "write_zeroes": true, 00:15:36.733 "zcopy": false, 00:15:36.733 "get_zone_info": false, 00:15:36.733 "zone_management": false, 00:15:36.733 "zone_append": false, 00:15:36.733 "compare": false, 00:15:36.733 "compare_and_write": false, 00:15:36.733 "abort": false, 00:15:36.733 "seek_hole": true, 00:15:36.733 "seek_data": true, 00:15:36.733 "copy": false, 00:15:36.733 "nvme_iov_md": false 00:15:36.733 }, 00:15:36.733 "driver_specific": { 00:15:36.733 "lvol": { 00:15:36.733 "lvol_store_uuid": "156c8d2a-0dcf-4c60-8785-5dd5017388bd", 00:15:36.733 "base_bdev": "aio_bdev", 00:15:36.733 "thin_provision": false, 00:15:36.733 "num_allocated_clusters": 38, 00:15:36.733 "snapshot": false, 00:15:36.733 "clone": false, 00:15:36.733 "esnap_clone": false 00:15:36.733 } 00:15:36.733 } 00:15:36.733 } 00:15:36.733 ] 00:15:36.733 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:36.733 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 156c8d2a-0dcf-4c60-8785-5dd5017388bd 00:15:36.733 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:36.994 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:36.994 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 156c8d2a-0dcf-4c60-8785-5dd5017388bd 00:15:36.994 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:36.994 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:36.994 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:37.255 [2024-07-15 11:27:05.758445] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:37.255 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 156c8d2a-0dcf-4c60-8785-5dd5017388bd 00:15:37.255 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:15:37.255 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 156c8d2a-0dcf-4c60-8785-5dd5017388bd 00:15:37.255 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:37.255 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:37.255 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:37.255 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:37.255 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:15:37.255 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:37.255 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:37.255 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:37.255 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 156c8d2a-0dcf-4c60-8785-5dd5017388bd 00:15:37.255 request: 00:15:37.255 { 00:15:37.255 "uuid": "156c8d2a-0dcf-4c60-8785-5dd5017388bd", 00:15:37.255 "method": "bdev_lvol_get_lvstores", 00:15:37.255 "req_id": 1 00:15:37.255 } 00:15:37.255 Got JSON-RPC error response 00:15:37.255 response: 00:15:37.255 { 00:15:37.255 "code": -19, 00:15:37.255 "message": "No such device" 00:15:37.255 } 00:15:37.255 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:15:37.255 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:37.255 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:37.255 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:37.255 11:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:37.516 aio_bdev 00:15:37.516 11:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2952a57c-5d98-4267-b201-203d89d23a0b 00:15:37.516 11:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=2952a57c-5d98-4267-b201-203d89d23a0b 00:15:37.516 11:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:37.516 11:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:37.516 11:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:37.516 11:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:37.516 11:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:37.777 11:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2952a57c-5d98-4267-b201-203d89d23a0b -t 2000 00:15:37.777 [ 00:15:37.777 { 00:15:37.777 "name": "2952a57c-5d98-4267-b201-203d89d23a0b", 00:15:37.777 "aliases": [ 00:15:37.777 "lvs/lvol" 00:15:37.777 ], 00:15:37.777 "product_name": "Logical Volume", 00:15:37.777 "block_size": 4096, 00:15:37.777 "num_blocks": 38912, 00:15:37.777 "uuid": "2952a57c-5d98-4267-b201-203d89d23a0b", 00:15:37.777 "assigned_rate_limits": { 00:15:37.777 "rw_ios_per_sec": 0, 00:15:37.777 "rw_mbytes_per_sec": 0, 00:15:37.777 "r_mbytes_per_sec": 0, 00:15:37.777 "w_mbytes_per_sec": 0 00:15:37.777 }, 00:15:37.777 "claimed": false, 00:15:37.777 "zoned": false, 00:15:37.777 "supported_io_types": { 
00:15:37.777 "read": true, 00:15:37.777 "write": true, 00:15:37.777 "unmap": true, 00:15:37.777 "flush": false, 00:15:37.777 "reset": true, 00:15:37.777 "nvme_admin": false, 00:15:37.777 "nvme_io": false, 00:15:37.777 "nvme_io_md": false, 00:15:37.777 "write_zeroes": true, 00:15:37.777 "zcopy": false, 00:15:37.777 "get_zone_info": false, 00:15:37.777 "zone_management": false, 00:15:37.777 "zone_append": false, 00:15:37.777 "compare": false, 00:15:37.777 "compare_and_write": false, 00:15:37.777 "abort": false, 00:15:37.777 "seek_hole": true, 00:15:37.777 "seek_data": true, 00:15:37.777 "copy": false, 00:15:37.777 "nvme_iov_md": false 00:15:37.777 }, 00:15:37.777 "driver_specific": { 00:15:37.777 "lvol": { 00:15:37.777 "lvol_store_uuid": "156c8d2a-0dcf-4c60-8785-5dd5017388bd", 00:15:37.777 "base_bdev": "aio_bdev", 00:15:37.777 "thin_provision": false, 00:15:37.777 "num_allocated_clusters": 38, 00:15:37.777 "snapshot": false, 00:15:37.777 "clone": false, 00:15:37.777 "esnap_clone": false 00:15:37.777 } 00:15:37.777 } 00:15:37.777 } 00:15:37.777 ] 00:15:37.777 11:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:37.777 11:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 156c8d2a-0dcf-4c60-8785-5dd5017388bd 00:15:37.777 11:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:38.038 11:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:38.039 11:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 156c8d2a-0dcf-4c60-8785-5dd5017388bd 00:15:38.039 11:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:38.039 11:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:38.039 11:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2952a57c-5d98-4267-b201-203d89d23a0b 00:15:38.299 11:27:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 156c8d2a-0dcf-4c60-8785-5dd5017388bd 00:15:38.560 11:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:38.560 11:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:38.560 00:15:38.560 real 0m16.933s 00:15:38.560 user 0m44.383s 00:15:38.560 sys 0m2.859s 00:15:38.560 11:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:38.560 11:27:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:38.560 ************************************ 00:15:38.560 END TEST lvs_grow_dirty 00:15:38.560 ************************************ 00:15:38.560 11:27:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:38.560 11:27:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:15:38.560 11:27:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:15:38.560 11:27:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:15:38.560 11:27:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:38.821 11:27:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:38.821 11:27:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:38.821 11:27:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:38.821 11:27:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:38.821 11:27:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:38.821 nvmf_trace.0 00:15:38.821 11:27:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:15:38.821 11:27:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:38.821 11:27:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:38.821 11:27:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:38.821 11:27:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:38.821 11:27:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:38.821 11:27:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:38.821 11:27:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:38.821 rmmod nvme_tcp 00:15:38.821 rmmod nvme_fabrics 00:15:38.821 rmmod nvme_keyring 00:15:38.821 11:27:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:38.821 11:27:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:38.821 11:27:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:38.821 11:27:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3494149 ']' 00:15:38.821 11:27:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3494149 00:15:38.821 11:27:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 3494149 ']' 00:15:38.821 11:27:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 3494149 00:15:38.821 11:27:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:15:38.821 11:27:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:38.821 11:27:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3494149 00:15:38.821 11:27:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:38.821 11:27:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:38.821 11:27:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3494149' 00:15:38.821 killing process with pid 3494149 00:15:38.821 11:27:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 3494149 00:15:38.821 11:27:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 3494149 00:15:39.082 11:27:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:39.082 11:27:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:39.082 11:27:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:39.082 
11:27:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:39.082 11:27:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:39.082 11:27:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.082 11:27:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.082 11:27:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.998 11:27:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:40.998 00:15:40.998 real 0m42.949s 00:15:40.998 user 1m5.169s 00:15:40.998 sys 0m9.877s 00:15:40.998 11:27:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:40.998 11:27:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:40.998 ************************************ 00:15:40.998 END TEST nvmf_lvs_grow 00:15:40.998 ************************************ 00:15:40.998 11:27:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:40.998 11:27:09 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:40.998 11:27:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:40.998 11:27:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:40.998 11:27:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:41.260 ************************************ 00:15:41.260 START TEST nvmf_bdev_io_wait 00:15:41.260 ************************************ 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:41.260 * Looking for test storage... 
00:15:41.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:15:41.260 11:27:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:49.404 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:49.404 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:49.404 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:49.405 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:49.405 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:49.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:49.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:15:49.405 00:15:49.405 --- 10.0.0.2 ping statistics --- 00:15:49.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.405 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:49.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:49.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.371 ms 00:15:49.405 00:15:49.405 --- 10.0.0.1 ping statistics --- 00:15:49.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.405 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3499344 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3499344 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 3499344 ']' 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:49.405 11:27:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:49.405 [2024-07-15 11:27:17.010692] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:15:49.405 [2024-07-15 11:27:17.010757] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.405 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.405 [2024-07-15 11:27:17.081733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:49.405 [2024-07-15 11:27:17.159203] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.405 [2024-07-15 11:27:17.159241] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.405 [2024-07-15 11:27:17.159249] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.405 [2024-07-15 11:27:17.159255] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.405 [2024-07-15 11:27:17.159261] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:49.405 [2024-07-15 11:27:17.159431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.405 [2024-07-15 11:27:17.159554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:49.405 [2024-07-15 11:27:17.159714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.405 [2024-07-15 11:27:17.159715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:49.405 [2024-07-15 11:27:17.895694] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
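For reference, the rpc_cmd calls traced here (and continued just below) boil down to the following manual sequence against the nvmf_tgt started above. This is a sketch only: it assumes rpc.py is invoked from the SPDK repo root against the default /var/tmp/spdk.sock the target is shown waiting on, and the comments reflect the test's apparent intent (a deliberately tiny bdev_io pool so writes hit the io_wait path) rather than a verified description of each flag.

  scripts/rpc.py bdev_set_options -p 5 -c 1          # shrink bdev_io pool/cache (assumed meaning of -p/-c)
  scripts/rpc.py framework_start_init                 # target was launched with --wait-for-rpc
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420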
00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:49.405 Malloc0 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:49.405 [2024-07-15 11:27:17.960413] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3499691 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3499693 00:15:49.405 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:49.406 { 00:15:49.406 "params": { 00:15:49.406 "name": "Nvme$subsystem", 00:15:49.406 "trtype": "$TEST_TRANSPORT", 00:15:49.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:49.406 "adrfam": "ipv4", 00:15:49.406 "trsvcid": "$NVMF_PORT", 00:15:49.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:49.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:49.406 "hdgst": ${hdgst:-false}, 00:15:49.406 "ddgst": ${ddgst:-false} 00:15:49.406 }, 00:15:49.406 "method": "bdev_nvme_attach_controller" 00:15:49.406 } 00:15:49.406 EOF 00:15:49.406 )") 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3499695 00:15:49.406 11:27:17 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3499698 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:49.406 { 00:15:49.406 "params": { 00:15:49.406 "name": "Nvme$subsystem", 00:15:49.406 "trtype": "$TEST_TRANSPORT", 00:15:49.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:49.406 "adrfam": "ipv4", 00:15:49.406 "trsvcid": "$NVMF_PORT", 00:15:49.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:49.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:49.406 "hdgst": ${hdgst:-false}, 00:15:49.406 "ddgst": ${ddgst:-false} 00:15:49.406 }, 00:15:49.406 "method": "bdev_nvme_attach_controller" 00:15:49.406 } 00:15:49.406 EOF 00:15:49.406 )") 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:49.406 { 00:15:49.406 "params": { 00:15:49.406 "name": "Nvme$subsystem", 00:15:49.406 "trtype": "$TEST_TRANSPORT", 00:15:49.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:49.406 "adrfam": "ipv4", 00:15:49.406 "trsvcid": "$NVMF_PORT", 00:15:49.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:49.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:49.406 "hdgst": ${hdgst:-false}, 00:15:49.406 "ddgst": ${ddgst:-false} 00:15:49.406 }, 00:15:49.406 "method": "bdev_nvme_attach_controller" 00:15:49.406 } 00:15:49.406 EOF 00:15:49.406 )") 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:49.406 { 00:15:49.406 "params": { 00:15:49.406 "name": "Nvme$subsystem", 00:15:49.406 "trtype": "$TEST_TRANSPORT", 00:15:49.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:49.406 "adrfam": "ipv4", 00:15:49.406 "trsvcid": "$NVMF_PORT", 00:15:49.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:49.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:49.406 "hdgst": ${hdgst:-false}, 00:15:49.406 "ddgst": ${ddgst:-false} 00:15:49.406 }, 00:15:49.406 "method": "bdev_nvme_attach_controller" 00:15:49.406 } 00:15:49.406 EOF 00:15:49.406 )") 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3499691 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:49.406 "params": { 00:15:49.406 "name": "Nvme1", 00:15:49.406 "trtype": "tcp", 00:15:49.406 "traddr": "10.0.0.2", 00:15:49.406 "adrfam": "ipv4", 00:15:49.406 "trsvcid": "4420", 00:15:49.406 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:49.406 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:49.406 "hdgst": false, 00:15:49.406 "ddgst": false 00:15:49.406 }, 00:15:49.406 "method": "bdev_nvme_attach_controller" 00:15:49.406 }' 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
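The four bdevperf instances launched above each read their bdev configuration from /dev/fd/63, i.e. the JSON assembled by the gen_nvmf_target_json helper from nvmf/common.sh whose fragments are printed around this point (the Nvme1 attach parameters for 10.0.0.2:4420). A rough equivalent for the write job, sketched under the assumption that gen_nvmf_target_json has been sourced from that common.sh:

  # core mask 0x10, shm id 1, qd 128, 4 KiB I/O, write workload, 1 s run,
  # 256 MB app memory (matches the "-m 256" visible in the EAL parameters line)
  ./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
      --json <(gen_nvmf_target_json)   # attaches bdev Nvme1 over NVMe/TCP before the run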
00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:49.406 "params": { 00:15:49.406 "name": "Nvme1", 00:15:49.406 "trtype": "tcp", 00:15:49.406 "traddr": "10.0.0.2", 00:15:49.406 "adrfam": "ipv4", 00:15:49.406 "trsvcid": "4420", 00:15:49.406 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:49.406 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:49.406 "hdgst": false, 00:15:49.406 "ddgst": false 00:15:49.406 }, 00:15:49.406 "method": "bdev_nvme_attach_controller" 00:15:49.406 }' 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:49.406 "params": { 00:15:49.406 "name": "Nvme1", 00:15:49.406 "trtype": "tcp", 00:15:49.406 "traddr": "10.0.0.2", 00:15:49.406 "adrfam": "ipv4", 00:15:49.406 "trsvcid": "4420", 00:15:49.406 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:49.406 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:49.406 "hdgst": false, 00:15:49.406 "ddgst": false 00:15:49.406 }, 00:15:49.406 "method": "bdev_nvme_attach_controller" 00:15:49.406 }' 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:49.406 11:27:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:49.406 "params": { 00:15:49.406 "name": "Nvme1", 00:15:49.406 "trtype": "tcp", 00:15:49.406 "traddr": "10.0.0.2", 00:15:49.406 "adrfam": "ipv4", 00:15:49.406 "trsvcid": "4420", 00:15:49.406 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:49.406 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:49.406 "hdgst": false, 00:15:49.406 "ddgst": false 00:15:49.406 }, 00:15:49.406 "method": "bdev_nvme_attach_controller" 00:15:49.406 }' 00:15:49.406 [2024-07-15 11:27:18.015056] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:15:49.406 [2024-07-15 11:27:18.015108] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:49.406 [2024-07-15 11:27:18.016791] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:15:49.406 [2024-07-15 11:27:18.016841] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:49.406 [2024-07-15 11:27:18.016998] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:15:49.406 [2024-07-15 11:27:18.017042] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:49.406 [2024-07-15 11:27:18.017060] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:15:49.406 [2024-07-15 11:27:18.017102] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:49.406 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.671 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.671 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.671 [2024-07-15 11:27:18.157853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.671 [2024-07-15 11:27:18.199876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.671 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.671 [2024-07-15 11:27:18.209733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:49.671 [2024-07-15 11:27:18.248655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.671 [2024-07-15 11:27:18.249982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:49.671 [2024-07-15 11:27:18.293826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.671 [2024-07-15 11:27:18.299277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:15:49.671 [2024-07-15 11:27:18.342983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:49.671 Running I/O for 1 seconds... 00:15:49.931 Running I/O for 1 seconds... 00:15:49.931 Running I/O for 1 seconds... 00:15:49.931 Running I/O for 1 seconds... 00:15:50.872 00:15:50.872 Latency(us) 00:15:50.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.872 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:50.872 Nvme1n1 : 1.01 12973.94 50.68 0.00 0.00 9831.82 5434.03 17913.17 00:15:50.872 =================================================================================================================== 00:15:50.872 Total : 12973.94 50.68 0.00 0.00 9831.82 5434.03 17913.17 00:15:50.872 00:15:50.872 Latency(us) 00:15:50.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.872 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:50.872 Nvme1n1 : 1.00 12881.90 50.32 0.00 0.00 9907.30 5106.35 20643.84 00:15:50.872 =================================================================================================================== 00:15:50.872 Total : 12881.90 50.32 0.00 0.00 9907.30 5106.35 20643.84 00:15:50.872 00:15:50.872 Latency(us) 00:15:50.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.872 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:50.872 Nvme1n1 : 1.01 12187.96 47.61 0.00 0.00 10467.27 5679.79 22173.01 00:15:50.872 =================================================================================================================== 00:15:50.872 Total : 12187.96 47.61 0.00 0.00 10467.27 5679.79 22173.01 00:15:50.872 00:15:50.872 Latency(us) 00:15:50.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.872 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:50.872 Nvme1n1 : 1.00 187794.40 733.57 0.00 0.00 679.18 271.36 781.65 00:15:50.872 =================================================================================================================== 00:15:50.872 Total : 187794.40 733.57 0.00 0.00 679.18 271.36 781.65 00:15:50.872 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 3499693 00:15:51.132 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3499695 00:15:51.132 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3499698 00:15:51.132 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:51.132 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.132 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:51.132 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.132 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:51.132 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:51.132 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:51.132 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:51.132 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:51.132 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:51.132 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:51.132 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:51.132 rmmod nvme_tcp 00:15:51.132 rmmod nvme_fabrics 00:15:51.132 rmmod nvme_keyring 00:15:51.132 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:51.132 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:51.132 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:51.132 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3499344 ']' 00:15:51.132 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3499344 00:15:51.132 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 3499344 ']' 00:15:51.132 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 3499344 00:15:51.132 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:15:51.132 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:51.132 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3499344 00:15:51.132 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:51.132 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:51.132 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3499344' 00:15:51.132 killing process with pid 3499344 00:15:51.132 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 3499344 00:15:51.132 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 3499344 00:15:51.393 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:51.393 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:51.393 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:51.393 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:51.393 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:15:51.393 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.393 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.393 11:27:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.304 11:27:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:53.304 00:15:53.304 real 0m12.246s 00:15:53.304 user 0m18.142s 00:15:53.304 sys 0m6.582s 00:15:53.304 11:27:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:53.304 11:27:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:53.304 ************************************ 00:15:53.304 END TEST nvmf_bdev_io_wait 00:15:53.304 ************************************ 00:15:53.566 11:27:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:53.566 11:27:22 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:53.566 11:27:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:53.566 11:27:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:53.566 11:27:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:53.566 ************************************ 00:15:53.566 START TEST nvmf_queue_depth 00:15:53.566 ************************************ 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:53.566 * Looking for test storage... 
00:15:53.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:53.566 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:53.567 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:53.567 11:27:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:53.567 11:27:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:53.567 11:27:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:53.567 11:27:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:53.567 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:53.567 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.567 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:53.567 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:53.567 11:27:22 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:15:53.567 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.567 11:27:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.567 11:27:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.567 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:53.567 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:53.567 11:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:53.567 11:27:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:00.213 
11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:00.213 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:00.213 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:00.213 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:00.213 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:00.213 11:27:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:00.474 11:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:00.474 11:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:00.474 11:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:00.474 11:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:00.474 11:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:00.474 11:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:00.474 11:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:00.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:00.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:16:00.474 00:16:00.474 --- 10.0.0.2 ping statistics --- 00:16:00.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.474 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:16:00.474 11:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:00.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:00.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.455 ms 00:16:00.474 00:16:00.474 --- 10.0.0.1 ping statistics --- 00:16:00.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.474 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:16:00.474 11:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:00.474 11:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:16:00.474 11:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:00.474 11:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:00.474 11:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:00.474 11:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:00.474 11:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:00.474 11:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:00.474 11:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:00.735 11:27:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:16:00.735 11:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:00.735 11:27:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:00.735 11:27:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:00.735 11:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3504056 00:16:00.735 11:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3504056 00:16:00.735 11:27:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:00.735 11:27:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3504056 ']' 00:16:00.735 11:27:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.735 11:27:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:00.735 11:27:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.735 11:27:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:00.735 11:27:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:00.735 [2024-07-15 11:27:29.278828] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:16:00.735 [2024-07-15 11:27:29.278891] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.735 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.735 [2024-07-15 11:27:29.369001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.995 [2024-07-15 11:27:29.463503] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.995 [2024-07-15 11:27:29.463562] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:00.995 [2024-07-15 11:27:29.463569] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:00.995 [2024-07-15 11:27:29.463577] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:00.995 [2024-07-15 11:27:29.463583] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:00.995 [2024-07-15 11:27:29.463618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.566 11:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:01.566 11:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:16:01.566 11:27:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:01.566 11:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:01.566 11:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:01.566 11:27:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:01.566 11:27:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:01.566 11:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.566 11:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:01.566 [2024-07-15 11:27:30.117744] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:01.566 11:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.566 11:27:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:01.566 11:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.566 11:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:01.566 Malloc0 00:16:01.566 11:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.566 11:27:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:01.566 11:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.566 11:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:01.566 11:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.566 11:27:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:01.566 11:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.567 
11:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:01.567 11:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.567 11:27:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:01.567 11:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.567 11:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:01.567 [2024-07-15 11:27:30.185456] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:01.567 11:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.567 11:27:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3504398 00:16:01.567 11:27:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:01.567 11:27:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:16:01.567 11:27:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3504398 /var/tmp/bdevperf.sock 00:16:01.567 11:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3504398 ']' 00:16:01.567 11:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:01.567 11:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:01.567 11:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:01.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:01.567 11:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:01.567 11:27:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:01.567 [2024-07-15 11:27:30.237485] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
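Up to this point the queue_depth test is a handful of RPCs against the target plus a bdevperf instance acting as initiator (bdevperf's own EAL banner continues below). The sketch strings together the exact commands traced above; rpc_cmd in the test scripts is a thin wrapper around scripts/rpc.py, and the long workspace prefix is shortened into variables here only for readability.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC=$SPDK/scripts/rpc.py

  # target side (default RPC socket /var/tmp/spdk.sock): TCP transport with the options used in this run,
  # a 64 MiB RAM-backed bdev with 512 B blocks, one subsystem exposing it on 10.0.0.2:4420
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: bdevperf waits for RPCs (-z), then runs 4 KiB verify I/O at queue depth 1024 for 10 s
  $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests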
00:16:01.567 [2024-07-15 11:27:30.237541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3504398 ] 00:16:01.567 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.827 [2024-07-15 11:27:30.297658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.827 [2024-07-15 11:27:30.364944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.398 11:27:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:02.398 11:27:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:16:02.398 11:27:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:02.398 11:27:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.398 11:27:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:02.659 NVMe0n1 00:16:02.659 11:27:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.659 11:27:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:02.659 Running I/O for 10 seconds... 00:16:14.889 00:16:14.889 Latency(us) 00:16:14.889 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.889 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:16:14.889 Verification LBA range: start 0x0 length 0x4000 00:16:14.889 NVMe0n1 : 10.06 11759.35 45.93 0.00 0.00 86729.62 18240.85 63351.47 00:16:14.889 =================================================================================================================== 00:16:14.889 Total : 11759.35 45.93 0.00 0.00 86729.62 18240.85 63351.47 00:16:14.889 0 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3504398 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3504398 ']' 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3504398 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3504398 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3504398' 00:16:14.889 killing process with pid 3504398 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3504398 00:16:14.889 Received shutdown signal, test time was about 10.000000 seconds 00:16:14.889 00:16:14.889 Latency(us) 00:16:14.889 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.889 
=================================================================================================================== 00:16:14.889 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3504398 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:14.889 rmmod nvme_tcp 00:16:14.889 rmmod nvme_fabrics 00:16:14.889 rmmod nvme_keyring 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3504056 ']' 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3504056 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3504056 ']' 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3504056 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3504056 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3504056' 00:16:14.889 killing process with pid 3504056 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3504056 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3504056 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.889 11:27:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.461 11:27:43 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:15.461 00:16:15.461 real 0m21.913s 00:16:15.461 user 0m25.724s 00:16:15.461 sys 0m6.422s 00:16:15.461 11:27:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:15.461 11:27:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:15.461 ************************************ 00:16:15.461 END TEST nvmf_queue_depth 00:16:15.461 ************************************ 00:16:15.461 11:27:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:15.461 11:27:43 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:15.461 11:27:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:15.461 11:27:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:15.461 11:27:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:15.461 ************************************ 00:16:15.461 START TEST nvmf_target_multipath 00:16:15.461 ************************************ 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:15.461 * Looking for test storage... 00:16:15.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:15.461 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:15.462 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:16:15.462 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:15.462 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:15.462 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:15.723 11:27:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:15.723 11:27:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:15.723 11:27:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:15.723 11:27:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:15.723 11:27:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:16:15.723 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:15.723 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:15.723 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:15.723 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:15.723 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:15.723 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.723 11:27:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.723 11:27:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.723 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:15.723 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:15.723 11:27:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:16:15.723 11:27:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:22.316 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:22.316 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:16:22.316 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:22.316 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:22.316 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:22.316 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:22.316 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:22.316 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:16:22.316 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:22.316 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:16:22.316 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:16:22.316 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:16:22.316 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:16:22.316 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:16:22.316 11:27:50 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:22.317 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:22.317 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:22.317 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:22.317 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:22.317 11:27:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:22.578 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:22.578 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:22.578 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:22.578 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:22.578 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:22.578 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:22.578 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:22.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:22.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.492 ms 00:16:22.578 00:16:22.578 --- 10.0.0.2 ping statistics --- 00:16:22.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.578 rtt min/avg/max/mdev = 0.492/0.492/0.492/0.000 ms 00:16:22.578 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:22.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:22.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.367 ms 00:16:22.578 00:16:22.578 --- 10.0.0.1 ping statistics --- 00:16:22.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.578 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:16:22.578 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:22.578 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:16:22.578 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:22.578 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:22.578 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:22.578 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:22.578 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:22.578 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:22.578 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:22.838 11:27:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:16:22.838 11:27:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:16:22.838 only one NIC for nvmf test 00:16:22.838 11:27:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:16:22.838 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:22.838 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:22.838 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:22.838 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:22.838 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:22.838 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:22.838 rmmod nvme_tcp 00:16:22.838 rmmod nvme_fabrics 00:16:22.838 rmmod nvme_keyring 00:16:22.838 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:22.838 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:22.838 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:22.838 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:22.838 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:22.838 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:22.838 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:22.838 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:22.838 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:22.838 11:27:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.838 11:27:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.838 11:27:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.758 11:27:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:16:24.758 11:27:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:16:24.758 11:27:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:16:24.758 11:27:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:24.758 11:27:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:24.758 11:27:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:24.758 11:27:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:24.758 11:27:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:24.758 11:27:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:24.758 11:27:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:24.758 11:27:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:24.758 11:27:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:25.018 11:27:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:25.018 11:27:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:25.018 11:27:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:25.018 11:27:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:25.018 11:27:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:25.018 11:27:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:25.018 11:27:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.018 11:27:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.018 11:27:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.018 11:27:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:25.018 00:16:25.018 real 0m9.437s 00:16:25.018 user 0m2.034s 00:16:25.018 sys 0m5.325s 00:16:25.018 11:27:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:25.018 11:27:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:25.018 ************************************ 00:16:25.018 END TEST nvmf_target_multipath 00:16:25.018 ************************************ 00:16:25.018 11:27:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:25.018 11:27:53 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:25.018 11:27:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:25.018 11:27:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:25.018 11:27:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:25.018 ************************************ 00:16:25.018 START TEST nvmf_zcopy 00:16:25.018 ************************************ 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:25.018 * Looking for test storage... 
00:16:25.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.018 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:25.019 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:25.019 11:27:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:16:25.019 11:27:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:33.161 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:33.161 
11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:33.161 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:33.161 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:33.161 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:33.161 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:33.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:33.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.555 ms 00:16:33.162 00:16:33.162 --- 10.0.0.2 ping statistics --- 00:16:33.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.162 rtt min/avg/max/mdev = 0.555/0.555/0.555/0.000 ms 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:33.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:33.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:16:33.162 00:16:33.162 --- 10.0.0.1 ping statistics --- 00:16:33.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.162 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3514723 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3514723 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 3514723 ']' 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:33.162 11:28:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:33.162 [2024-07-15 11:28:00.760283] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:16:33.162 [2024-07-15 11:28:00.760371] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.162 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.162 [2024-07-15 11:28:00.847298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.162 [2024-07-15 11:28:00.940857] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.162 [2024-07-15 11:28:00.940908] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:33.162 [2024-07-15 11:28:00.940916] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:33.162 [2024-07-15 11:28:00.940923] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:33.162 [2024-07-15 11:28:00.940929] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:33.162 [2024-07-15 11:28:00.940965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:33.162 [2024-07-15 11:28:01.584386] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:33.162 [2024-07-15 11:28:01.608604] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:33.162 malloc0 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.162 
11:28:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:33.162 { 00:16:33.162 "params": { 00:16:33.162 "name": "Nvme$subsystem", 00:16:33.162 "trtype": "$TEST_TRANSPORT", 00:16:33.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:33.162 "adrfam": "ipv4", 00:16:33.162 "trsvcid": "$NVMF_PORT", 00:16:33.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:33.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:33.162 "hdgst": ${hdgst:-false}, 00:16:33.162 "ddgst": ${ddgst:-false} 00:16:33.162 }, 00:16:33.162 "method": "bdev_nvme_attach_controller" 00:16:33.162 } 00:16:33.162 EOF 00:16:33.162 )") 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:33.162 11:28:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:33.162 "params": { 00:16:33.162 "name": "Nvme1", 00:16:33.162 "trtype": "tcp", 00:16:33.162 "traddr": "10.0.0.2", 00:16:33.162 "adrfam": "ipv4", 00:16:33.162 "trsvcid": "4420", 00:16:33.162 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:33.162 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:33.162 "hdgst": false, 00:16:33.162 "ddgst": false 00:16:33.162 }, 00:16:33.162 "method": "bdev_nvme_attach_controller" 00:16:33.162 }' 00:16:33.162 [2024-07-15 11:28:01.708119] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:16:33.162 [2024-07-15 11:28:01.708213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3515068 ] 00:16:33.162 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.162 [2024-07-15 11:28:01.774391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.162 [2024-07-15 11:28:01.849074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.423 Running I/O for 10 seconds... 
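NOTE: the 10-second verify run launched above reads its bdev configuration from /dev/fd/62, and the bdev_nvme_attach_controller parameters it receives are exactly the ones printed by gen_nvmf_target_json in the trace. A sketch of a standalone equivalent follows: the params block is taken verbatim from the trace, while the surrounding "subsystems"/"bdev" wrapper and the /tmp file path are assumptions based on the standard SPDK JSON config layout, not the exact output of gen_nvmf_target_json.
# sketch only: standalone equivalent of the bdevperf verify run traced above
cat > /tmp/zcopy_bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/zcopy_bdevperf.json -t 10 -q 128 -w verify -o 8192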
00:16:43.479 00:16:43.479 Latency(us) 00:16:43.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.479 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:43.479 Verification LBA range: start 0x0 length 0x1000 00:16:43.479 Nvme1n1 : 10.01 9267.44 72.40 0.00 0.00 13760.44 2293.76 32112.64 00:16:43.479 =================================================================================================================== 00:16:43.479 Total : 9267.44 72.40 0.00 0.00 13760.44 2293.76 32112.64 00:16:43.739 11:28:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3517087 00:16:43.739 11:28:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:16:43.739 11:28:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:43.739 11:28:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:43.739 11:28:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:43.739 11:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:43.739 11:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:43.739 11:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:43.739 11:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:43.739 { 00:16:43.739 "params": { 00:16:43.739 "name": "Nvme$subsystem", 00:16:43.739 "trtype": "$TEST_TRANSPORT", 00:16:43.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:43.739 "adrfam": "ipv4", 00:16:43.739 "trsvcid": "$NVMF_PORT", 00:16:43.739 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:43.739 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:43.739 "hdgst": ${hdgst:-false}, 00:16:43.739 "ddgst": ${ddgst:-false} 00:16:43.739 }, 00:16:43.739 "method": "bdev_nvme_attach_controller" 00:16:43.739 } 00:16:43.739 EOF 00:16:43.739 )") 00:16:43.739 11:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:43.739 [2024-07-15 11:28:12.293447] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.739 [2024-07-15 11:28:12.293473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.739 11:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:16:43.739 11:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:43.739 11:28:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:43.739 "params": { 00:16:43.739 "name": "Nvme1", 00:16:43.739 "trtype": "tcp", 00:16:43.739 "traddr": "10.0.0.2", 00:16:43.739 "adrfam": "ipv4", 00:16:43.739 "trsvcid": "4420", 00:16:43.739 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:43.739 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:43.739 "hdgst": false, 00:16:43.739 "ddgst": false 00:16:43.739 }, 00:16:43.739 "method": "bdev_nvme_attach_controller" 00:16:43.739 }' 00:16:43.739 [2024-07-15 11:28:12.305452] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.739 [2024-07-15 11:28:12.305461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.739 [2024-07-15 11:28:12.317482] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.739 [2024-07-15 11:28:12.317490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.739 [2024-07-15 11:28:12.329513] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.739 [2024-07-15 11:28:12.329521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.739 [2024-07-15 11:28:12.333572] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:16:43.739 [2024-07-15 11:28:12.333620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3517087 ] 00:16:43.739 [2024-07-15 11:28:12.341542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.740 [2024-07-15 11:28:12.341550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.740 [2024-07-15 11:28:12.353573] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.740 [2024-07-15 11:28:12.353581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.740 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.740 [2024-07-15 11:28:12.365603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.740 [2024-07-15 11:28:12.365611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.740 [2024-07-15 11:28:12.377634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.740 [2024-07-15 11:28:12.377642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.740 [2024-07-15 11:28:12.389666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.740 [2024-07-15 11:28:12.389674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.740 [2024-07-15 11:28:12.391435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.740 [2024-07-15 11:28:12.401698] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.740 [2024-07-15 11:28:12.401706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.740 [2024-07-15 11:28:12.413729] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.740 [2024-07-15 11:28:12.413737] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.740 [2024-07-15 11:28:12.425760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.740 [2024-07-15 11:28:12.425773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.740 [2024-07-15 11:28:12.437790] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.740 [2024-07-15 11:28:12.437799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.999 [2024-07-15 11:28:12.449820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.999 [2024-07-15 11:28:12.449830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.999 [2024-07-15 11:28:12.455407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.999 [2024-07-15 11:28:12.461850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.999 [2024-07-15 11:28:12.461857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.999 [2024-07-15 11:28:12.473885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.999 [2024-07-15 11:28:12.473899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.999 [2024-07-15 11:28:12.485912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.999 [2024-07-15 11:28:12.485921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.999 [2024-07-15 11:28:12.497942] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.999 [2024-07-15 11:28:12.497950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.999 [2024-07-15 11:28:12.509972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.999 [2024-07-15 11:28:12.509979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.999 [2024-07-15 11:28:12.522002] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.999 [2024-07-15 11:28:12.522009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.999 [2024-07-15 11:28:12.534041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.999 [2024-07-15 11:28:12.534055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.999 [2024-07-15 11:28:12.546067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.999 [2024-07-15 11:28:12.546076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.999 [2024-07-15 11:28:12.558098] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.999 [2024-07-15 11:28:12.558108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.000 [2024-07-15 11:28:12.570132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.000 [2024-07-15 11:28:12.570140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.000 [2024-07-15 11:28:12.582161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.000 [2024-07-15 11:28:12.582169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:16:44.000 [2024-07-15 11:28:12.594193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.000 [2024-07-15 11:28:12.594201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.000 [2024-07-15 11:28:12.606228] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.000 [2024-07-15 11:28:12.606237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.000 [2024-07-15 11:28:12.618258] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.000 [2024-07-15 11:28:12.618267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.000 [2024-07-15 11:28:12.630288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.000 [2024-07-15 11:28:12.630295] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.000 [2024-07-15 11:28:12.642319] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.000 [2024-07-15 11:28:12.642327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.000 [2024-07-15 11:28:12.654350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.000 [2024-07-15 11:28:12.654359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.000 [2024-07-15 11:28:12.666382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.000 [2024-07-15 11:28:12.666389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.000 [2024-07-15 11:28:12.678413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.000 [2024-07-15 11:28:12.678421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.000 [2024-07-15 11:28:12.690446] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.000 [2024-07-15 11:28:12.690453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.259 [2024-07-15 11:28:12.702477] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.260 [2024-07-15 11:28:12.702487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.260 [2024-07-15 11:28:12.714508] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.260 [2024-07-15 11:28:12.714515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.260 [2024-07-15 11:28:12.726542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.260 [2024-07-15 11:28:12.726550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.260 [2024-07-15 11:28:12.738573] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.260 [2024-07-15 11:28:12.738581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.260 [2024-07-15 11:28:12.752080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.260 [2024-07-15 11:28:12.752094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.260 [2024-07-15 11:28:12.762638] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:16:44.260 [2024-07-15 11:28:12.762648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.260 Running I/O for 5 seconds... 00:16:44.260 [2024-07-15 11:28:12.779898] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.260 [2024-07-15 11:28:12.779916] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.260 [2024-07-15 11:28:12.795589] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.260 [2024-07-15 11:28:12.795606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.260 [2024-07-15 11:28:12.806030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.260 [2024-07-15 11:28:12.806046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.260 [2024-07-15 11:28:12.822167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.260 [2024-07-15 11:28:12.822184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.260 [2024-07-15 11:28:12.837674] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.260 [2024-07-15 11:28:12.837691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.260 [2024-07-15 11:28:12.852379] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.260 [2024-07-15 11:28:12.852396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.260 [2024-07-15 11:28:12.863531] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.260 [2024-07-15 11:28:12.863547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.260 [2024-07-15 11:28:12.880002] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.260 [2024-07-15 11:28:12.880018] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.260 [2024-07-15 11:28:12.895214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.260 [2024-07-15 11:28:12.895230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.260 [2024-07-15 11:28:12.909238] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.260 [2024-07-15 11:28:12.909254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.260 [2024-07-15 11:28:12.925631] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.260 [2024-07-15 11:28:12.925649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.260 [2024-07-15 11:28:12.941163] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.260 [2024-07-15 11:28:12.941180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.260 [2024-07-15 11:28:12.957613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.260 [2024-07-15 11:28:12.957630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.520 [2024-07-15 11:28:12.973622] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.520 [2024-07-15 11:28:12.973640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:16:44.520 [2024-07-15 11:28:12.988298] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.520 [2024-07-15 11:28:12.988315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.520 [2024-07-15 11:28:13.003518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.520 [2024-07-15 11:28:13.003534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.520 [2024-07-15 11:28:13.019476] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.520 [2024-07-15 11:28:13.019493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.520 [2024-07-15 11:28:13.035924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.520 [2024-07-15 11:28:13.035942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.520 [2024-07-15 11:28:13.047931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.520 [2024-07-15 11:28:13.047959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.520 [2024-07-15 11:28:13.062929] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.520 [2024-07-15 11:28:13.062946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.520 [2024-07-15 11:28:13.074590] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.520 [2024-07-15 11:28:13.074607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.520 [2024-07-15 11:28:13.090615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.520 [2024-07-15 11:28:13.090632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.520 [2024-07-15 11:28:13.106546] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.520 [2024-07-15 11:28:13.106564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.520 [2024-07-15 11:28:13.121466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.520 [2024-07-15 11:28:13.121483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.520 [2024-07-15 11:28:13.137498] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.520 [2024-07-15 11:28:13.137515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.520 [2024-07-15 11:28:13.151435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.520 [2024-07-15 11:28:13.151452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.520 [2024-07-15 11:28:13.167674] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.520 [2024-07-15 11:28:13.167691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.520 [2024-07-15 11:28:13.183274] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.520 [2024-07-15 11:28:13.183291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.520 [2024-07-15 11:28:13.195255] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:16:44.520 [2024-07-15 11:28:13.195272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.520 [2024-07-15 11:28:13.211841] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.520 [2024-07-15 11:28:13.211859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.780 [2024-07-15 11:28:13.227991] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.780 [2024-07-15 11:28:13.228009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.780 [2024-07-15 11:28:13.244555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.780 [2024-07-15 11:28:13.244572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.780 [2024-07-15 11:28:13.260518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.780 [2024-07-15 11:28:13.260535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.780 [2024-07-15 11:28:13.275847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.780 [2024-07-15 11:28:13.275865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.780 [2024-07-15 11:28:13.292038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.780 [2024-07-15 11:28:13.292055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.780 [2024-07-15 11:28:13.303558] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.780 [2024-07-15 11:28:13.303574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.780 [2024-07-15 11:28:13.320165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.780 [2024-07-15 11:28:13.320181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.780 [2024-07-15 11:28:13.336026] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.780 [2024-07-15 11:28:13.336053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.780 [2024-07-15 11:28:13.352373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.780 [2024-07-15 11:28:13.352389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.780 [2024-07-15 11:28:13.363265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.781 [2024-07-15 11:28:13.363282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.781 [2024-07-15 11:28:13.380132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.781 [2024-07-15 11:28:13.380148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.781 [2024-07-15 11:28:13.395709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.781 [2024-07-15 11:28:13.395727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.781 [2024-07-15 11:28:13.407109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.781 [2024-07-15 11:28:13.407130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.781 [2024-07-15 11:28:13.423088] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.781 [2024-07-15 11:28:13.423105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.781 [2024-07-15 11:28:13.438935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.781 [2024-07-15 11:28:13.438952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.781 [2024-07-15 11:28:13.453410] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.781 [2024-07-15 11:28:13.453426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.781 [2024-07-15 11:28:13.464715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.781 [2024-07-15 11:28:13.464731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.781 [2024-07-15 11:28:13.481197] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.781 [2024-07-15 11:28:13.481214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.066 [2024-07-15 11:28:13.497024] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.066 [2024-07-15 11:28:13.497041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.066 [2024-07-15 11:28:13.512614] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.066 [2024-07-15 11:28:13.512631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.066 [2024-07-15 11:28:13.528193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.066 [2024-07-15 11:28:13.528210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.066 [2024-07-15 11:28:13.544826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.066 [2024-07-15 11:28:13.544842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.066 [2024-07-15 11:28:13.561489] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.066 [2024-07-15 11:28:13.561505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.066 [2024-07-15 11:28:13.576986] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.066 [2024-07-15 11:28:13.577004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.066 [2024-07-15 11:28:13.592443] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.066 [2024-07-15 11:28:13.592460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.066 [2024-07-15 11:28:13.608343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.066 [2024-07-15 11:28:13.608360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.066 [2024-07-15 11:28:13.619814] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.066 [2024-07-15 11:28:13.619839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.066 [2024-07-15 11:28:13.636020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.066 [2024-07-15 11:28:13.636036] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.066 [2024-07-15 11:28:13.652005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.066 [2024-07-15 11:28:13.652021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.066 [2024-07-15 11:28:13.667947] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.066 [2024-07-15 11:28:13.667963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.066 [2024-07-15 11:28:13.682572] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.066 [2024-07-15 11:28:13.682590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.066 [2024-07-15 11:28:13.698216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.066 [2024-07-15 11:28:13.698232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.066 [2024-07-15 11:28:13.714556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.066 [2024-07-15 11:28:13.714571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.066 [2024-07-15 11:28:13.732138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.066 [2024-07-15 11:28:13.732156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.326 [2024-07-15 11:28:13.748264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.326 [2024-07-15 11:28:13.748282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.326 [2024-07-15 11:28:13.764504] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.326 [2024-07-15 11:28:13.764520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.326 [2024-07-15 11:28:13.781320] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.326 [2024-07-15 11:28:13.781336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.326 [2024-07-15 11:28:13.798018] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.326 [2024-07-15 11:28:13.798035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.326 [2024-07-15 11:28:13.813644] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.326 [2024-07-15 11:28:13.813661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.326 [2024-07-15 11:28:13.826392] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.326 [2024-07-15 11:28:13.826408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.326 [2024-07-15 11:28:13.842629] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.326 [2024-07-15 11:28:13.842645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.326 [2024-07-15 11:28:13.859027] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.326 [2024-07-15 11:28:13.859042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.326 [2024-07-15 11:28:13.875136] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.326 [2024-07-15 11:28:13.875153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.326 [2024-07-15 11:28:13.890499] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.326 [2024-07-15 11:28:13.890515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.326 [2024-07-15 11:28:13.907315] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.326 [2024-07-15 11:28:13.907331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.326 [2024-07-15 11:28:13.923372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.326 [2024-07-15 11:28:13.923394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.326 [2024-07-15 11:28:13.939294] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.326 [2024-07-15 11:28:13.939310] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.326 [2024-07-15 11:28:13.953901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.326 [2024-07-15 11:28:13.953917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.326 [2024-07-15 11:28:13.969282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.326 [2024-07-15 11:28:13.969298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.326 [2024-07-15 11:28:13.985752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.326 [2024-07-15 11:28:13.985768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.327 [2024-07-15 11:28:14.001668] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.327 [2024-07-15 11:28:14.001684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.327 [2024-07-15 11:28:14.016406] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.327 [2024-07-15 11:28:14.016422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.586 [2024-07-15 11:28:14.031716] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.586 [2024-07-15 11:28:14.031733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.586 [2024-07-15 11:28:14.047444] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.586 [2024-07-15 11:28:14.047460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.586 [2024-07-15 11:28:14.062763] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.586 [2024-07-15 11:28:14.062780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.586 [2024-07-15 11:28:14.079350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.586 [2024-07-15 11:28:14.079365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.586 [2024-07-15 11:28:14.095271] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.586 [2024-07-15 11:28:14.095296] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.586 [2024-07-15 11:28:14.107337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.586 [2024-07-15 11:28:14.107355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.586 [2024-07-15 11:28:14.122982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.586 [2024-07-15 11:28:14.123000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.586 [2024-07-15 11:28:14.139473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.586 [2024-07-15 11:28:14.139489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.586 [2024-07-15 11:28:14.155447] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.586 [2024-07-15 11:28:14.155464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.586 [2024-07-15 11:28:14.169312] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.586 [2024-07-15 11:28:14.169329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.586 [2024-07-15 11:28:14.184652] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.586 [2024-07-15 11:28:14.184669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.586 [2024-07-15 11:28:14.201704] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.586 [2024-07-15 11:28:14.201721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.586 [2024-07-15 11:28:14.218166] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.586 [2024-07-15 11:28:14.218183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.586 [2024-07-15 11:28:14.234141] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.587 [2024-07-15 11:28:14.234159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.587 [2024-07-15 11:28:14.248882] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.587 [2024-07-15 11:28:14.248899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.587 [2024-07-15 11:28:14.263642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.587 [2024-07-15 11:28:14.263658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.587 [2024-07-15 11:28:14.275099] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.587 [2024-07-15 11:28:14.275115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.846 [2024-07-15 11:28:14.291764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.846 [2024-07-15 11:28:14.291780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.846 [2024-07-15 11:28:14.307318] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.846 [2024-07-15 11:28:14.307335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.846 [2024-07-15 11:28:14.318651] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.846 [2024-07-15 11:28:14.318668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.846 [2024-07-15 11:28:14.334464] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.846 [2024-07-15 11:28:14.334480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.846 [2024-07-15 11:28:14.350402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.846 [2024-07-15 11:28:14.350418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.846 [2024-07-15 11:28:14.364295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.846 [2024-07-15 11:28:14.364311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.846 [2024-07-15 11:28:14.379771] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.846 [2024-07-15 11:28:14.379787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.846 [2024-07-15 11:28:14.396221] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.846 [2024-07-15 11:28:14.396238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.846 [2024-07-15 11:28:14.411780] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.846 [2024-07-15 11:28:14.411796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.846 [2024-07-15 11:28:14.426349] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.846 [2024-07-15 11:28:14.426365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.846 [2024-07-15 11:28:14.442340] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.846 [2024-07-15 11:28:14.442356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.846 [2024-07-15 11:28:14.458456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.846 [2024-07-15 11:28:14.458472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.846 [2024-07-15 11:28:14.475090] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.846 [2024-07-15 11:28:14.475107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.846 [2024-07-15 11:28:14.491410] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.846 [2024-07-15 11:28:14.491427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.846 [2024-07-15 11:28:14.503296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.846 [2024-07-15 11:28:14.503313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.846 [2024-07-15 11:28:14.518603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.846 [2024-07-15 11:28:14.518619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:45.846 [2024-07-15 11:28:14.534495] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:45.846 [2024-07-15 11:28:14.534510] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.106 [2024-07-15 11:28:14.550784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.106 [2024-07-15 11:28:14.550800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.106 [2024-07-15 11:28:14.561456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.106 [2024-07-15 11:28:14.561472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.106 [2024-07-15 11:28:14.577638] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.106 [2024-07-15 11:28:14.577654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.106 [2024-07-15 11:28:14.593567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.106 [2024-07-15 11:28:14.593585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.106 [2024-07-15 11:28:14.607681] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.106 [2024-07-15 11:28:14.607698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.106 [2024-07-15 11:28:14.623681] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.106 [2024-07-15 11:28:14.623697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.106 [2024-07-15 11:28:14.639455] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.106 [2024-07-15 11:28:14.639471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.106 [2024-07-15 11:28:14.655309] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.106 [2024-07-15 11:28:14.655325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.106 [2024-07-15 11:28:14.669769] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.106 [2024-07-15 11:28:14.669786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.106 [2024-07-15 11:28:14.685147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.106 [2024-07-15 11:28:14.685164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.106 [2024-07-15 11:28:14.701224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.106 [2024-07-15 11:28:14.701240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.106 [2024-07-15 11:28:14.717067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.106 [2024-07-15 11:28:14.717084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.106 [2024-07-15 11:28:14.732015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.106 [2024-07-15 11:28:14.732032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.106 [2024-07-15 11:28:14.748778] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.106 [2024-07-15 11:28:14.748795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.106 [2024-07-15 11:28:14.764632] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.106 [2024-07-15 11:28:14.764649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.106 [2024-07-15 11:28:14.780598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.106 [2024-07-15 11:28:14.780614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.106 [2024-07-15 11:28:14.796518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.106 [2024-07-15 11:28:14.796535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.367 [2024-07-15 11:28:14.812158] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.367 [2024-07-15 11:28:14.812176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.367 [2024-07-15 11:28:14.827337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.367 [2024-07-15 11:28:14.827353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.367 [2024-07-15 11:28:14.843290] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.367 [2024-07-15 11:28:14.843306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.367 [2024-07-15 11:28:14.857757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.367 [2024-07-15 11:28:14.857773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.367 [2024-07-15 11:28:14.873057] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.367 [2024-07-15 11:28:14.873074] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.367 [2024-07-15 11:28:14.888426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.367 [2024-07-15 11:28:14.888443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.367 [2024-07-15 11:28:14.902386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.367 [2024-07-15 11:28:14.902402] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.367 [2024-07-15 11:28:14.918577] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.367 [2024-07-15 11:28:14.918593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.367 [2024-07-15 11:28:14.934723] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.367 [2024-07-15 11:28:14.934739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.367 [2024-07-15 11:28:14.950225] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.367 [2024-07-15 11:28:14.950243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.367 [2024-07-15 11:28:14.966067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.367 [2024-07-15 11:28:14.966084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.367 [2024-07-15 11:28:14.979720] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.367 [2024-07-15 11:28:14.979737] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.367 [2024-07-15 11:28:14.995827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.367 [2024-07-15 11:28:14.995844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.367 [2024-07-15 11:28:15.011646] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.367 [2024-07-15 11:28:15.011663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.367 [2024-07-15 11:28:15.023184] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.367 [2024-07-15 11:28:15.023201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.367 [2024-07-15 11:28:15.038435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.367 [2024-07-15 11:28:15.038452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.367 [2024-07-15 11:28:15.054350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.367 [2024-07-15 11:28:15.054367] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.628 [2024-07-15 11:28:15.068671] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.628 [2024-07-15 11:28:15.068688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.628 [2024-07-15 11:28:15.084044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.628 [2024-07-15 11:28:15.084061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.628 [2024-07-15 11:28:15.100283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.628 [2024-07-15 11:28:15.100299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.628 [2024-07-15 11:28:15.116327] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.628 [2024-07-15 11:28:15.116343] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.628 [2024-07-15 11:28:15.132684] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.628 [2024-07-15 11:28:15.132701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.628 [2024-07-15 11:28:15.143908] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.628 [2024-07-15 11:28:15.143924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.628 [2024-07-15 11:28:15.160038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.628 [2024-07-15 11:28:15.160054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.628 [2024-07-15 11:28:15.175685] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.628 [2024-07-15 11:28:15.175702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.628 [2024-07-15 11:28:15.190186] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.628 [2024-07-15 11:28:15.190202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.628 [2024-07-15 11:28:15.205603] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.628 [2024-07-15 11:28:15.205619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.628 [2024-07-15 11:28:15.221592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.628 [2024-07-15 11:28:15.221608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.628 [2024-07-15 11:28:15.237603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.628 [2024-07-15 11:28:15.237620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.628 [2024-07-15 11:28:15.251510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.628 [2024-07-15 11:28:15.251526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.628 [2024-07-15 11:28:15.267555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.628 [2024-07-15 11:28:15.267570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.628 [2024-07-15 11:28:15.283366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.628 [2024-07-15 11:28:15.283382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.628 [2024-07-15 11:28:15.298520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.628 [2024-07-15 11:28:15.298537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.628 [2024-07-15 11:28:15.314931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.628 [2024-07-15 11:28:15.314947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.897 [2024-07-15 11:28:15.330819] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.897 [2024-07-15 11:28:15.330835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.897 [2024-07-15 11:28:15.341875] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.897 [2024-07-15 11:28:15.341892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.897 [2024-07-15 11:28:15.358145] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.897 [2024-07-15 11:28:15.358167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.897 [2024-07-15 11:28:15.373839] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.897 [2024-07-15 11:28:15.373856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.897 [2024-07-15 11:28:15.388419] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.897 [2024-07-15 11:28:15.388435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.897 [2024-07-15 11:28:15.397039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.897 [2024-07-15 11:28:15.397055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.897 [2024-07-15 11:28:15.405600] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.897 [2024-07-15 11:28:15.405616] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.897 [2024-07-15 11:28:15.414382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.897 [2024-07-15 11:28:15.414398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.897 [2024-07-15 11:28:15.422968] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.897 [2024-07-15 11:28:15.422984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.897 [2024-07-15 11:28:15.431545] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.897 [2024-07-15 11:28:15.431561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.897 [2024-07-15 11:28:15.445945] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.897 [2024-07-15 11:28:15.445961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.897 [2024-07-15 11:28:15.462149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.897 [2024-07-15 11:28:15.462166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.897 [2024-07-15 11:28:15.476691] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.897 [2024-07-15 11:28:15.476707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.897 [2024-07-15 11:28:15.486968] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.897 [2024-07-15 11:28:15.486984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.897 [2024-07-15 11:28:15.502885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.897 [2024-07-15 11:28:15.502900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.897 [2024-07-15 11:28:15.518585] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.897 [2024-07-15 11:28:15.518600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.897 [2024-07-15 11:28:15.534951] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.897 [2024-07-15 11:28:15.534968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.897 [2024-07-15 11:28:15.551851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.897 [2024-07-15 11:28:15.551867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.897 [2024-07-15 11:28:15.567710] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.897 [2024-07-15 11:28:15.567726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.898 [2024-07-15 11:28:15.581636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.898 [2024-07-15 11:28:15.581652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.898 [2024-07-15 11:28:15.596561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:46.898 [2024-07-15 11:28:15.596578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.157 [2024-07-15 11:28:15.607142] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.157 [2024-07-15 11:28:15.607166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.157 [2024-07-15 11:28:15.623518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.157 [2024-07-15 11:28:15.623534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.157 [2024-07-15 11:28:15.639255] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.157 [2024-07-15 11:28:15.639271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.157 [2024-07-15 11:28:15.654359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.157 [2024-07-15 11:28:15.654374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.157 [2024-07-15 11:28:15.671329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.157 [2024-07-15 11:28:15.671344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.157 [2024-07-15 11:28:15.687234] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.157 [2024-07-15 11:28:15.687250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.157 [2024-07-15 11:28:15.701767] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.157 [2024-07-15 11:28:15.701783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.157 [2024-07-15 11:28:15.716383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.157 [2024-07-15 11:28:15.716398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.157 [2024-07-15 11:28:15.732064] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.157 [2024-07-15 11:28:15.732080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.157 [2024-07-15 11:28:15.748205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.157 [2024-07-15 11:28:15.748222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.157 [2024-07-15 11:28:15.761827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.157 [2024-07-15 11:28:15.761843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.157 [2024-07-15 11:28:15.778145] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.157 [2024-07-15 11:28:15.778161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.157 [2024-07-15 11:28:15.794331] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.157 [2024-07-15 11:28:15.794347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.157 [2024-07-15 11:28:15.810382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.157 [2024-07-15 11:28:15.810399] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.157 [2024-07-15 11:28:15.825260] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.157 [2024-07-15 11:28:15.825275] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.157 [2024-07-15 11:28:15.841557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.157 [2024-07-15 11:28:15.841573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.157 [2024-07-15 11:28:15.855844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.157 [2024-07-15 11:28:15.855860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.416 [2024-07-15 11:28:15.870835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.416 [2024-07-15 11:28:15.870852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.416 [2024-07-15 11:28:15.881318] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.416 [2024-07-15 11:28:15.881333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.416 [2024-07-15 11:28:15.897722] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.416 [2024-07-15 11:28:15.897745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.416 [2024-07-15 11:28:15.914093] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.416 [2024-07-15 11:28:15.914109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.416 [2024-07-15 11:28:15.930038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.416 [2024-07-15 11:28:15.930055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.416 [2024-07-15 11:28:15.941811] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.416 [2024-07-15 11:28:15.941827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.416 [2024-07-15 11:28:15.957945] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.416 [2024-07-15 11:28:15.957961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.416 [2024-07-15 11:28:15.974215] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.416 [2024-07-15 11:28:15.974232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.416 [2024-07-15 11:28:15.985989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.416 [2024-07-15 11:28:15.986005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.416 [2024-07-15 11:28:16.001478] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.416 [2024-07-15 11:28:16.001493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.416 [2024-07-15 11:28:16.017740] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.416 [2024-07-15 11:28:16.017757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.416 [2024-07-15 11:28:16.029295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.416 [2024-07-15 11:28:16.029312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.416 [2024-07-15 11:28:16.045967] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.416 [2024-07-15 11:28:16.045984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.416 [2024-07-15 11:28:16.061949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.417 [2024-07-15 11:28:16.061965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.417 [2024-07-15 11:28:16.078213] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.417 [2024-07-15 11:28:16.078229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.417 [2024-07-15 11:28:16.094368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.417 [2024-07-15 11:28:16.094384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.417 [2024-07-15 11:28:16.110004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.417 [2024-07-15 11:28:16.110020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.676 [2024-07-15 11:28:16.124809] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.676 [2024-07-15 11:28:16.124826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.676 [2024-07-15 11:28:16.135087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.676 [2024-07-15 11:28:16.135104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.676 [2024-07-15 11:28:16.151628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.676 [2024-07-15 11:28:16.151645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.676 [2024-07-15 11:28:16.167584] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.676 [2024-07-15 11:28:16.167600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.676 [2024-07-15 11:28:16.184188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.676 [2024-07-15 11:28:16.184211] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.676 [2024-07-15 11:28:16.200250] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.676 [2024-07-15 11:28:16.200267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.676 [2024-07-15 11:28:16.215942] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.676 [2024-07-15 11:28:16.215959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.676 [2024-07-15 11:28:16.232320] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.676 [2024-07-15 11:28:16.232336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.676 [2024-07-15 11:28:16.248453] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.676 [2024-07-15 11:28:16.248470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.676 [2024-07-15 11:28:16.263686] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.676 [2024-07-15 11:28:16.263702] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.676 [2024-07-15 11:28:16.279456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.676 [2024-07-15 11:28:16.279472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.676 [2024-07-15 11:28:16.295381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.676 [2024-07-15 11:28:16.295397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.676 [2024-07-15 11:28:16.311015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.676 [2024-07-15 11:28:16.311032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.676 [2024-07-15 11:28:16.322849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.676 [2024-07-15 11:28:16.322865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.676 [2024-07-15 11:28:16.339876] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.676 [2024-07-15 11:28:16.339892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.676 [2024-07-15 11:28:16.354997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.676 [2024-07-15 11:28:16.355013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.676 [2024-07-15 11:28:16.366868] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.676 [2024-07-15 11:28:16.366885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.936 [2024-07-15 11:28:16.381918] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.936 [2024-07-15 11:28:16.381935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.936 [2024-07-15 11:28:16.393088] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.936 [2024-07-15 11:28:16.393104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.936 [2024-07-15 11:28:16.409368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.936 [2024-07-15 11:28:16.409384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.936 [2024-07-15 11:28:16.425488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.936 [2024-07-15 11:28:16.425506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.936 [2024-07-15 11:28:16.441447] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.936 [2024-07-15 11:28:16.441464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.936 [2024-07-15 11:28:16.453066] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.936 [2024-07-15 11:28:16.453083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.936 [2024-07-15 11:28:16.470050] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.936 [2024-07-15 11:28:16.470068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.936 [2024-07-15 11:28:16.486179] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.936 [2024-07-15 11:28:16.486195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.936 [2024-07-15 11:28:16.502102] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.936 [2024-07-15 11:28:16.502119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.936 [2024-07-15 11:28:16.513265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.936 [2024-07-15 11:28:16.513282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.936 [2024-07-15 11:28:16.528703] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.936 [2024-07-15 11:28:16.528720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.936 [2024-07-15 11:28:16.545045] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.936 [2024-07-15 11:28:16.545061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.936 [2024-07-15 11:28:16.561244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.936 [2024-07-15 11:28:16.561260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.936 [2024-07-15 11:28:16.572687] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.936 [2024-07-15 11:28:16.572704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.936 [2024-07-15 11:28:16.588200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.936 [2024-07-15 11:28:16.588218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.936 [2024-07-15 11:28:16.604265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.936 [2024-07-15 11:28:16.604282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.936 [2024-07-15 11:28:16.615718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.936 [2024-07-15 11:28:16.615734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:47.936 [2024-07-15 11:28:16.632085] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:47.936 [2024-07-15 11:28:16.632101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.195 [2024-07-15 11:28:16.648088] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.195 [2024-07-15 11:28:16.648105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.195 [2024-07-15 11:28:16.658323] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.195 [2024-07-15 11:28:16.658339] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.195 [2024-07-15 11:28:16.675006] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.195 [2024-07-15 11:28:16.675023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.195 [2024-07-15 11:28:16.691194] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.195 [2024-07-15 11:28:16.691210] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.195 [2024-07-15 11:28:16.707092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.195 [2024-07-15 11:28:16.707108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.195 [2024-07-15 11:28:16.722964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.195 [2024-07-15 11:28:16.722980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.195 [2024-07-15 11:28:16.737853] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.195 [2024-07-15 11:28:16.737869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.195 [2024-07-15 11:28:16.752049] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.195 [2024-07-15 11:28:16.752066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.195 [2024-07-15 11:28:16.767143] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.195 [2024-07-15 11:28:16.767160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.195 [2024-07-15 11:28:16.778989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.195 [2024-07-15 11:28:16.779007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.195 [2024-07-15 11:28:16.794041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.195 [2024-07-15 11:28:16.794058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.195 [2024-07-15 11:28:16.805060] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.195 [2024-07-15 11:28:16.805077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.195 [2024-07-15 11:28:16.821248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.195 [2024-07-15 11:28:16.821266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.195 [2024-07-15 11:28:16.837613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.195 [2024-07-15 11:28:16.837630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.195 [2024-07-15 11:28:16.853323] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.195 [2024-07-15 11:28:16.853340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.195 [2024-07-15 11:28:16.868347] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.195 [2024-07-15 11:28:16.868363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.195 [2024-07-15 11:28:16.882564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.195 [2024-07-15 11:28:16.882582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.455 [2024-07-15 11:28:16.898700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.455 [2024-07-15 11:28:16.898718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.455 [2024-07-15 11:28:16.914359] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.455 [2024-07-15 11:28:16.914376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.455 [2024-07-15 11:28:16.929465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.455 [2024-07-15 11:28:16.929480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.455 [2024-07-15 11:28:16.945745] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.455 [2024-07-15 11:28:16.945762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.455 [2024-07-15 11:28:16.957023] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.455 [2024-07-15 11:28:16.957039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.455 [2024-07-15 11:28:16.973552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.455 [2024-07-15 11:28:16.973570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.455 [2024-07-15 11:28:16.989437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.455 [2024-07-15 11:28:16.989454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.455 [2024-07-15 11:28:17.005986] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.455 [2024-07-15 11:28:17.006003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.455 [2024-07-15 11:28:17.017877] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.455 [2024-07-15 11:28:17.017894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.455 [2024-07-15 11:28:17.034174] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.455 [2024-07-15 11:28:17.034191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.455 [2024-07-15 11:28:17.050213] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.455 [2024-07-15 11:28:17.050229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.455 [2024-07-15 11:28:17.066175] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.455 [2024-07-15 11:28:17.066192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.455 [2024-07-15 11:28:17.081488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.455 [2024-07-15 11:28:17.081504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.455 [2024-07-15 11:28:17.097228] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.455 [2024-07-15 11:28:17.097245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.455 [2024-07-15 11:28:17.112096] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.455 [2024-07-15 11:28:17.112114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.455 [2024-07-15 11:28:17.127252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.455 [2024-07-15 11:28:17.127268] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.455 [2024-07-15 11:28:17.143593] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.455 [2024-07-15 11:28:17.143609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.715 [2024-07-15 11:28:17.160112] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.715 [2024-07-15 11:28:17.160135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.715 [2024-07-15 11:28:17.176493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.715 [2024-07-15 11:28:17.176509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.715 [2024-07-15 11:28:17.192458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.715 [2024-07-15 11:28:17.192475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.715 [2024-07-15 11:28:17.205724] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.715 [2024-07-15 11:28:17.205740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.715 [2024-07-15 11:28:17.221739] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.715 [2024-07-15 11:28:17.221755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.715 [2024-07-15 11:28:17.238053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.715 [2024-07-15 11:28:17.238070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.715 [2024-07-15 11:28:17.253602] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.715 [2024-07-15 11:28:17.253619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.715 [2024-07-15 11:28:17.268404] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.715 [2024-07-15 11:28:17.268420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.715 [2024-07-15 11:28:17.283152] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.715 [2024-07-15 11:28:17.283168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.715 [2024-07-15 11:28:17.298289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.715 [2024-07-15 11:28:17.298305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.715 [2024-07-15 11:28:17.314730] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.715 [2024-07-15 11:28:17.314746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.715 [2024-07-15 11:28:17.330235] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.715 [2024-07-15 11:28:17.330252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.715 [2024-07-15 11:28:17.343915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.715 [2024-07-15 11:28:17.343931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.715 [2024-07-15 11:28:17.360032] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.715 [2024-07-15 11:28:17.360048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.715 [2024-07-15 11:28:17.375503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.715 [2024-07-15 11:28:17.375519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.715 [2024-07-15 11:28:17.386919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.716 [2024-07-15 11:28:17.386936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.716 [2024-07-15 11:28:17.403014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.716 [2024-07-15 11:28:17.403030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.418325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.418341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.426809] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.426825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.435793] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.435809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.444238] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.444253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.452850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.452866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.461593] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.461609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.470170] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.470186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.478749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.478764] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.487230] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.487246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.495929] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.495944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.504511] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.504526] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.513134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.513150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.521602] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.521625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.530391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.530407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.539106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.539128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.548102] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.548117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.556891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.556907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.565700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.565715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.574374] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.574389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.583225] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.583241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.592171] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.592186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.600775] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.600790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.609649] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.609664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.618815] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.618831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.627321] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.627337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.635956] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.635972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.645891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.645907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.654636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.654652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.663180] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.663195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:48.975 [2024-07-15 11:28:17.672070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:48.975 [2024-07-15 11:28:17.672086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 [2024-07-15 11:28:17.680936] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:49.235 [2024-07-15 11:28:17.680952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 [2024-07-15 11:28:17.689910] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:49.235 [2024-07-15 11:28:17.689933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 [2024-07-15 11:28:17.698775] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:49.235 [2024-07-15 11:28:17.698790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 [2024-07-15 11:28:17.707465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:49.235 [2024-07-15 11:28:17.707480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 [2024-07-15 11:28:17.716181] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:49.235 [2024-07-15 11:28:17.716198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 [2024-07-15 11:28:17.725100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:49.235 [2024-07-15 11:28:17.725116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 [2024-07-15 11:28:17.733851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:49.235 [2024-07-15 11:28:17.733867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 [2024-07-15 11:28:17.742360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:49.235 [2024-07-15 11:28:17.742376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 [2024-07-15 11:28:17.751106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:49.235 [2024-07-15 11:28:17.751127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 [2024-07-15 11:28:17.759722] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:49.235 [2024-07-15 11:28:17.759738] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 [2024-07-15 11:28:17.768314] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:49.235 [2024-07-15 11:28:17.768329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 [2024-07-15 11:28:17.776937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:49.235 [2024-07-15 11:28:17.776952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 [2024-07-15 11:28:17.784439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:49.235 [2024-07-15 11:28:17.784455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 00:16:49.235 Latency(us) 00:16:49.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.235 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:49.235 Nvme1n1 : 5.01 14613.04 114.16 0.00 0.00 8749.98 4068.69 19551.57 00:16:49.235 =================================================================================================================== 00:16:49.235 Total : 14613.04 114.16 0.00 0.00 8749.98 4068.69 19551.57 00:16:49.235 [2024-07-15 11:28:17.791355] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:49.235 [2024-07-15 11:28:17.791368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 [2024-07-15 11:28:17.799371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:49.235 [2024-07-15 11:28:17.799381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 [2024-07-15 11:28:17.807394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:49.235 [2024-07-15 11:28:17.807403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 [2024-07-15 11:28:17.815416] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:49.235 [2024-07-15 11:28:17.815427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 [2024-07-15 11:28:17.823434] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:49.235 [2024-07-15 11:28:17.823455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 [2024-07-15 11:28:17.831454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:49.235 [2024-07-15 11:28:17.831464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 [2024-07-15 11:28:17.839491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:49.235 [2024-07-15 11:28:17.839501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 [2024-07-15 11:28:17.847493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:49.235 [2024-07-15 11:28:17.847500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 [2024-07-15 11:28:17.855513] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:49.235 [2024-07-15 11:28:17.855521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 [2024-07-15 11:28:17.863534] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:49.235 [2024-07-15 11:28:17.863541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 [2024-07-15 11:28:17.871554] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:49.235 [2024-07-15 11:28:17.871562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 [2024-07-15 11:28:17.879576] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:49.235 [2024-07-15 11:28:17.879585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 [2024-07-15 11:28:17.887593] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:49.235 [2024-07-15 11:28:17.887600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 [2024-07-15 11:28:17.895616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:49.235 [2024-07-15 11:28:17.895624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 [2024-07-15 11:28:17.903635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:49.235 [2024-07-15 11:28:17.903644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 [2024-07-15 11:28:17.911655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:49.235 [2024-07-15 11:28:17.911662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3517087) - No such process 00:16:49.235 11:28:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3517087 00:16:49.235 11:28:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:49.235 11:28:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.235 11:28:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:49.235 11:28:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.235 11:28:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:49.235 11:28:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.235 11:28:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:49.235 delay0 00:16:49.235 11:28:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.235 11:28:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:49.235 11:28:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.235 11:28:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:49.495 11:28:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.495 11:28:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:49.495 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.495 [2024-07-15 11:28:18.040779] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: 
Skipping unsupported current discovery service or discovery service referral 00:16:56.074 Initializing NVMe Controllers 00:16:56.074 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:56.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:56.074 Initialization complete. Launching workers. 00:16:56.074 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 154 00:16:56.074 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 437, failed to submit 37 00:16:56.074 success 258, unsuccess 179, failed 0 00:16:56.074 11:28:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:56.074 11:28:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:16:56.074 11:28:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:56.074 11:28:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:16:56.074 11:28:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:56.074 11:28:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:16:56.074 11:28:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:56.074 11:28:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:56.074 rmmod nvme_tcp 00:16:56.074 rmmod nvme_fabrics 00:16:56.074 rmmod nvme_keyring 00:16:56.074 11:28:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:56.074 11:28:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:16:56.074 11:28:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:16:56.074 11:28:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3514723 ']' 00:16:56.074 11:28:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3514723 00:16:56.074 11:28:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 3514723 ']' 00:16:56.074 11:28:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 3514723 00:16:56.074 11:28:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:16:56.074 11:28:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:56.075 11:28:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3514723 00:16:56.075 11:28:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:56.075 11:28:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:56.075 11:28:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3514723' 00:16:56.075 killing process with pid 3514723 00:16:56.075 11:28:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 3514723 00:16:56.075 11:28:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 3514723 00:16:56.075 11:28:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:56.075 11:28:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:56.075 11:28:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:56.075 11:28:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:56.075 11:28:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:56.075 11:28:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.075 11:28:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:16:56.075 11:28:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.984 11:28:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:57.984 00:16:57.984 real 0m33.009s 00:16:57.984 user 0m43.072s 00:16:57.984 sys 0m10.459s 00:16:57.984 11:28:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:57.984 11:28:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:57.984 ************************************ 00:16:57.984 END TEST nvmf_zcopy 00:16:57.984 ************************************ 00:16:57.984 11:28:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:57.984 11:28:26 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:57.984 11:28:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:57.984 11:28:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:57.984 11:28:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:57.984 ************************************ 00:16:57.984 START TEST nvmf_nmic 00:16:57.984 ************************************ 00:16:57.984 11:28:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:58.251 * Looking for test storage... 00:16:58.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:58.251 11:28:26 
nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:58.251 11:28:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:04.847 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:04.847 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.847 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:04.847 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:04.848 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:04.848 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:05.108 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:05.108 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:05.108 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:05.108 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:05.108 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:05.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:05.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.509 ms 00:17:05.108 00:17:05.108 --- 10.0.0.2 ping statistics --- 00:17:05.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.108 rtt min/avg/max/mdev = 0.509/0.509/0.509/0.000 ms 00:17:05.108 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:05.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:05.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:17:05.108 00:17:05.108 --- 10.0.0.1 ping statistics --- 00:17:05.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.108 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:17:05.108 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:05.108 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:17:05.108 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:05.108 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:05.108 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:05.108 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:05.108 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:05.108 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:05.108 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:05.108 11:28:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:17:05.108 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:05.108 11:28:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:05.108 11:28:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:05.108 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3523424 00:17:05.108 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3523424 00:17:05.108 11:28:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:05.108 11:28:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 3523424 ']' 00:17:05.108 11:28:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.108 11:28:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:05.108 11:28:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.108 11:28:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:05.108 11:28:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:05.108 [2024-07-15 11:28:33.797535] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
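For orientation, the namespace plumbing that nvmf_tcp_init traces above reduces to the following sequence. This is a condensed, annotated sketch of the commands visible in this log, not the verbatim nvmf/common.sh source; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are simply the ones this run happened to use.

    # cvl_0_0 becomes the target-side port inside its own network namespace;
    # cvl_0_1 stays in the root namespace and acts as the initiator port.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                 # initiator -> target reachability check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator reachability check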
00:17:05.108 [2024-07-15 11:28:33.797585] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.368 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.368 [2024-07-15 11:28:33.864136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:05.368 [2024-07-15 11:28:33.931100] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:05.368 [2024-07-15 11:28:33.931141] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:05.368 [2024-07-15 11:28:33.931149] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:05.368 [2024-07-15 11:28:33.931155] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:05.368 [2024-07-15 11:28:33.931161] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:05.368 [2024-07-15 11:28:33.931230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.368 [2024-07-15 11:28:33.931455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:05.368 [2024-07-15 11:28:33.931610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.368 [2024-07-15 11:28:33.931610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:06.003 [2024-07-15 11:28:34.614787] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:06.003 Malloc0 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:06.003 [2024-07-15 11:28:34.658245] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:17:06.003 test case1: single bdev can't be used in multiple subsystems 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:06.003 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.290 11:28:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:06.290 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.290 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:06.290 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.290 11:28:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:17:06.290 11:28:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:17:06.290 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.290 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:06.290 [2024-07-15 11:28:34.682135] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:17:06.290 [2024-07-15 11:28:34.682154] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:17:06.290 [2024-07-15 11:28:34.682161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:06.290 request: 00:17:06.290 { 00:17:06.290 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:17:06.290 "namespace": { 00:17:06.290 "bdev_name": "Malloc0", 00:17:06.290 "no_auto_visible": false 00:17:06.290 }, 00:17:06.290 "method": "nvmf_subsystem_add_ns", 00:17:06.290 "req_id": 1 00:17:06.290 } 00:17:06.290 Got JSON-RPC error response 00:17:06.290 response: 00:17:06.290 { 00:17:06.290 "code": -32602, 00:17:06.290 "message": "Invalid parameters" 00:17:06.290 } 00:17:06.290 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:06.290 11:28:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:17:06.290 11:28:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:17:06.290 11:28:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # 
echo ' Adding namespace failed - expected result.' 00:17:06.290 Adding namespace failed - expected result. 00:17:06.290 11:28:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:17:06.290 test case2: host connect to nvmf target in multiple paths 00:17:06.290 11:28:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:06.290 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.290 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:06.290 [2024-07-15 11:28:34.694250] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:06.290 11:28:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.290 11:28:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:07.671 11:28:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:17:09.582 11:28:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:17:09.582 11:28:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:17:09.582 11:28:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:09.582 11:28:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:09.582 11:28:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:17:11.490 11:28:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:11.490 11:28:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:11.490 11:28:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:11.490 11:28:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:11.490 11:28:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:11.490 11:28:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:17:11.490 11:28:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:11.490 [global] 00:17:11.490 thread=1 00:17:11.490 invalidate=1 00:17:11.490 rw=write 00:17:11.490 time_based=1 00:17:11.490 runtime=1 00:17:11.490 ioengine=libaio 00:17:11.490 direct=1 00:17:11.490 bs=4096 00:17:11.490 iodepth=1 00:17:11.490 norandommap=0 00:17:11.490 numjobs=1 00:17:11.490 00:17:11.490 verify_dump=1 00:17:11.491 verify_backlog=512 00:17:11.491 verify_state_save=0 00:17:11.491 do_verify=1 00:17:11.491 verify=crc32c-intel 00:17:11.491 [job0] 00:17:11.491 filename=/dev/nvme0n1 00:17:11.491 Could not set queue depth (nvme0n1) 00:17:11.491 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:11.491 fio-3.35 00:17:11.491 Starting 1 thread 00:17:12.874 00:17:12.874 job0: (groupid=0, jobs=1): err= 0: pid=3524963: Mon Jul 15 11:28:41 2024 00:17:12.874 read: IOPS=203, BW=813KiB/s 
(832kB/s)(816KiB/1004msec) 00:17:12.874 slat (nsec): min=8513, max=58647, avg=24820.61, stdev=4329.50 00:17:12.874 clat (usec): min=873, max=42038, avg=3089.81, stdev=8748.74 00:17:12.874 lat (usec): min=882, max=42063, avg=3114.63, stdev=8748.91 00:17:12.874 clat percentiles (usec): 00:17:12.874 | 1.00th=[ 922], 5.00th=[ 996], 10.00th=[ 1029], 20.00th=[ 1074], 00:17:12.874 | 30.00th=[ 1090], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1123], 00:17:12.874 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1188], 95.00th=[ 1270], 00:17:12.874 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:12.874 | 99.99th=[42206] 00:17:12.874 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:17:12.874 slat (nsec): min=9170, max=61405, avg=26928.55, stdev=9237.14 00:17:12.874 clat (usec): min=368, max=923, avg=682.96, stdev=86.41 00:17:12.874 lat (usec): min=378, max=934, avg=709.89, stdev=90.29 00:17:12.874 clat percentiles (usec): 00:17:12.874 | 1.00th=[ 461], 5.00th=[ 545], 10.00th=[ 562], 20.00th=[ 635], 00:17:12.874 | 30.00th=[ 652], 40.00th=[ 668], 50.00th=[ 676], 60.00th=[ 709], 00:17:12.874 | 70.00th=[ 734], 80.00th=[ 758], 90.00th=[ 783], 95.00th=[ 807], 00:17:12.874 | 99.00th=[ 848], 99.50th=[ 865], 99.90th=[ 922], 99.95th=[ 922], 00:17:12.874 | 99.99th=[ 922] 00:17:12.874 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:17:12.874 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:12.874 lat (usec) : 500=3.21%, 750=52.09%, 1000=17.74% 00:17:12.874 lat (msec) : 2=25.56%, 50=1.40% 00:17:12.874 cpu : usr=1.10%, sys=1.79%, ctx=716, majf=0, minf=1 00:17:12.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:12.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.874 issued rwts: total=204,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.874 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:12.874 00:17:12.874 Run status group 0 (all jobs): 00:17:12.874 READ: bw=813KiB/s (832kB/s), 813KiB/s-813KiB/s (832kB/s-832kB/s), io=816KiB (836kB), run=1004-1004msec 00:17:12.874 WRITE: bw=2040KiB/s (2089kB/s), 2040KiB/s-2040KiB/s (2089kB/s-2089kB/s), io=2048KiB (2097kB), run=1004-1004msec 00:17:12.874 00:17:12.874 Disk stats (read/write): 00:17:12.874 nvme0n1: ios=251/512, merge=0/0, ticks=565/344, in_queue=909, util=93.99% 00:17:12.874 11:28:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:12.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:12.874 11:28:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:12.874 11:28:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:17:12.874 11:28:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:12.874 11:28:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:12.874 11:28:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:12.874 11:28:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:12.874 11:28:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:17:12.874 11:28:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:17:12.874 11:28:41 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@53 -- # nvmftestfini 00:17:12.874 11:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:12.874 11:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:17:12.874 11:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:12.874 11:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:17:12.874 11:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:12.874 11:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:12.874 rmmod nvme_tcp 00:17:12.874 rmmod nvme_fabrics 00:17:12.874 rmmod nvme_keyring 00:17:12.874 11:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:12.874 11:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:17:12.874 11:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:17:12.874 11:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3523424 ']' 00:17:12.874 11:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3523424 00:17:12.874 11:28:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 3523424 ']' 00:17:12.874 11:28:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 3523424 00:17:12.874 11:28:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:17:12.874 11:28:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:12.874 11:28:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3523424 00:17:13.136 11:28:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:13.136 11:28:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:13.136 11:28:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3523424' 00:17:13.136 killing process with pid 3523424 00:17:13.136 11:28:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 3523424 00:17:13.136 11:28:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 3523424 00:17:13.136 11:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:13.136 11:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:13.136 11:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:13.136 11:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:13.136 11:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:13.136 11:28:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.136 11:28:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.136 11:28:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.677 11:28:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:15.677 00:17:15.677 real 0m17.205s 00:17:15.677 user 0m47.422s 00:17:15.677 sys 0m5.916s 00:17:15.677 11:28:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:15.677 11:28:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:15.677 ************************************ 00:17:15.677 END TEST nvmf_nmic 00:17:15.677 ************************************ 00:17:15.677 11:28:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:15.677 11:28:43 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:15.677 11:28:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:15.677 11:28:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:15.677 11:28:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:15.677 ************************************ 00:17:15.677 START TEST nvmf_fio_target 00:17:15.677 ************************************ 00:17:15.677 11:28:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:15.677 * Looking for test storage... 00:17:15.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:15.677 11:28:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:15.677 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:17:15.677 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:15.677 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:15.677 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:15.677 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:15.677 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:15.677 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:15.677 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:15.677 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:15.677 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:15.677 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:15.677 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:15.677 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:15.677 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:15.677 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:15.677 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:15.677 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:15.677 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:15.677 11:28:44 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:15.677 11:28:44 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:15.677 11:28:44 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:15.677 11:28:44 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.678 11:28:44 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.678 11:28:44 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.678 11:28:44 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:17:15.678 11:28:44 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.678 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:17:15.678 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:15.678 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:15.678 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:15.678 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:15.678 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:15.678 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:15.678 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:15.678 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:15.678 11:28:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:15.678 11:28:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:15.678 11:28:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:15.678 11:28:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:17:15.678 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:15.678 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:15.678 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:15.678 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:15.678 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:15.678 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.678 11:28:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:15.678 11:28:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.678 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:15.678 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:15.678 11:28:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:15.678 11:28:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:22.261 11:28:50 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:22.261 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:22.261 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:22.261 11:28:50 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:22.261 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:22.261 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:22.261 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:22.262 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:22.262 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:22.262 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:22.262 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:22.262 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:22.262 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:22.262 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:22.262 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:22.262 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:22.262 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:22.262 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:22.262 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:22.262 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:22.262 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:22.262 11:28:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:22.521 11:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:22.521 11:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:22.521 11:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:22.521 11:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:17:22.521 11:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:22.521 11:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:22.521 11:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:22.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:22.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:17:22.521 00:17:22.521 --- 10.0.0.2 ping statistics --- 00:17:22.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.521 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:17:22.521 11:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:22.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:22.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.361 ms 00:17:22.521 00:17:22.521 --- 10.0.0.1 ping statistics --- 00:17:22.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.521 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:17:22.521 11:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:22.521 11:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:17:22.521 11:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:22.521 11:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:22.521 11:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:22.521 11:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:22.521 11:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:22.521 11:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:22.521 11:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:22.781 11:28:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:22.781 11:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:22.781 11:28:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:22.781 11:28:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.781 11:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3529298 00:17:22.781 11:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3529298 00:17:22.781 11:28:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:22.781 11:28:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 3529298 ']' 00:17:22.781 11:28:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.781 11:28:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:22.781 11:28:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
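The nvmfappstart step traced here amounts to launching the target binary inside the target namespace and blocking until its JSON-RPC socket answers. Below is an illustrative stand-in for that pattern using the paths from this run; the polling loop is a simplification of the waitforlisten helper, not its actual implementation.

    # Start nvmf_tgt in the target namespace: -i shared-memory id, -e tracepoint
    # group mask (0xFFFF, as echoed in the log), -m core mask 0xF (reactors 0-3).
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Block until the app listens on /var/tmp/spdk.sock before issuing any RPCs.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done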
00:17:22.781 11:28:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:22.781 11:28:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.781 [2024-07-15 11:28:51.303297] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:17:22.781 [2024-07-15 11:28:51.303348] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.781 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.781 [2024-07-15 11:28:51.368837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:22.781 [2024-07-15 11:28:51.434434] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.781 [2024-07-15 11:28:51.434468] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.781 [2024-07-15 11:28:51.434475] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:22.781 [2024-07-15 11:28:51.434482] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:22.781 [2024-07-15 11:28:51.434487] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:22.781 [2024-07-15 11:28:51.434630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.781 [2024-07-15 11:28:51.434761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:22.781 [2024-07-15 11:28:51.434917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.781 [2024-07-15 11:28:51.434918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:23.719 11:28:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:23.719 11:28:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:17:23.719 11:28:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:23.719 11:28:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:23.719 11:28:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.719 11:28:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.719 11:28:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:23.719 [2024-07-15 11:28:52.253224] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:23.719 11:28:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:23.978 11:28:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:23.978 11:28:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:23.978 11:28:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:23.978 11:28:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:24.238 11:28:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
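From this point target/fio.sh provisions the block devices it will export over NVMe/TCP entirely through rpc.py; the transport and the first malloc bdevs appear in the trace above, and the raid0, concat0, subsystem and listener steps continue below. A condensed sketch of that sequence, with the long rpc.py path shortened for readability:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # TCP transport ('-t tcp -o' comes from NVMF_TRANSPORT_OPTS set earlier in the trace).
    "$rpc" nvmf_create_transport -t tcp -o -u 8192

    # Two plain 64 MB / 512 B-block malloc bdevs; rpc.py prints the generated
    # names (Malloc0 and Malloc1 in this run), which the script collects.
    malloc_bdevs="$("$rpc" bdev_malloc_create 64 512) "
    malloc_bdevs+=$("$rpc" bdev_malloc_create 64 512)

    # Two more malloc bdevs that become members of the raid0 bdev, matching the
    # bdev_raid_create call that follows in the trace below.
    raid_malloc_bdevs="$("$rpc" bdev_malloc_create 64 512) "
    raid_malloc_bdevs+=$("$rpc" bdev_malloc_create 64 512)
    "$rpc" bdev_raid_create -n raid0 -z 64 -r 0 -b "$raid_malloc_bdevs"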
00:17:24.238 11:28:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:24.497 11:28:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:24.497 11:28:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:24.497 11:28:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:24.757 11:28:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:24.757 11:28:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:25.019 11:28:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:25.019 11:28:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:25.019 11:28:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:25.019 11:28:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:25.279 11:28:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:25.539 11:28:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:25.539 11:28:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:25.539 11:28:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:25.539 11:28:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:25.799 11:28:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:25.799 [2024-07-15 11:28:54.498658] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.059 11:28:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:26.059 11:28:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:26.319 11:28:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:27.704 11:28:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:27.704 11:28:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:17:27.704 11:28:56 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:27.704 11:28:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:17:27.704 11:28:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:17:27.704 11:28:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:17:30.246 11:28:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:30.246 11:28:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:30.246 11:28:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:30.246 11:28:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:17:30.246 11:28:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:30.246 11:28:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:17:30.246 11:28:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:30.246 [global] 00:17:30.246 thread=1 00:17:30.246 invalidate=1 00:17:30.246 rw=write 00:17:30.246 time_based=1 00:17:30.246 runtime=1 00:17:30.246 ioengine=libaio 00:17:30.246 direct=1 00:17:30.246 bs=4096 00:17:30.246 iodepth=1 00:17:30.246 norandommap=0 00:17:30.246 numjobs=1 00:17:30.246 00:17:30.246 verify_dump=1 00:17:30.246 verify_backlog=512 00:17:30.246 verify_state_save=0 00:17:30.246 do_verify=1 00:17:30.246 verify=crc32c-intel 00:17:30.246 [job0] 00:17:30.246 filename=/dev/nvme0n1 00:17:30.246 [job1] 00:17:30.246 filename=/dev/nvme0n2 00:17:30.246 [job2] 00:17:30.246 filename=/dev/nvme0n3 00:17:30.246 [job3] 00:17:30.246 filename=/dev/nvme0n4 00:17:30.246 Could not set queue depth (nvme0n1) 00:17:30.246 Could not set queue depth (nvme0n2) 00:17:30.246 Could not set queue depth (nvme0n3) 00:17:30.246 Could not set queue depth (nvme0n4) 00:17:30.246 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:30.246 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:30.246 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:30.246 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:30.246 fio-3.35 00:17:30.246 Starting 4 threads 00:17:31.717 00:17:31.717 job0: (groupid=0, jobs=1): err= 0: pid=3531151: Mon Jul 15 11:29:00 2024 00:17:31.717 read: IOPS=477, BW=1910KiB/s (1956kB/s)(1912KiB/1001msec) 00:17:31.717 slat (nsec): min=25935, max=61214, avg=26980.77, stdev=3117.33 00:17:31.717 clat (usec): min=857, max=1462, avg=1163.78, stdev=82.61 00:17:31.717 lat (usec): min=884, max=1489, avg=1190.77, stdev=82.60 00:17:31.717 clat percentiles (usec): 00:17:31.717 | 1.00th=[ 955], 5.00th=[ 1004], 10.00th=[ 1057], 20.00th=[ 1106], 00:17:31.717 | 30.00th=[ 1123], 40.00th=[ 1156], 50.00th=[ 1172], 60.00th=[ 1188], 00:17:31.717 | 70.00th=[ 1205], 80.00th=[ 1237], 90.00th=[ 1270], 95.00th=[ 1287], 00:17:31.717 | 99.00th=[ 1336], 99.50th=[ 1336], 99.90th=[ 1467], 99.95th=[ 1467], 00:17:31.717 | 99.99th=[ 1467] 00:17:31.717 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:31.717 slat (nsec): min=9367, max=56796, avg=32816.98, stdev=8524.13 00:17:31.717 
clat (usec): min=409, max=1053, avg=789.10, stdev=103.36 00:17:31.717 lat (usec): min=423, max=1087, avg=821.92, stdev=106.11 00:17:31.717 clat percentiles (usec): 00:17:31.717 | 1.00th=[ 537], 5.00th=[ 619], 10.00th=[ 652], 20.00th=[ 709], 00:17:31.717 | 30.00th=[ 734], 40.00th=[ 766], 50.00th=[ 799], 60.00th=[ 824], 00:17:31.717 | 70.00th=[ 848], 80.00th=[ 881], 90.00th=[ 922], 95.00th=[ 947], 00:17:31.717 | 99.00th=[ 1012], 99.50th=[ 1045], 99.90th=[ 1057], 99.95th=[ 1057], 00:17:31.717 | 99.99th=[ 1057] 00:17:31.717 bw ( KiB/s): min= 4096, max= 4096, per=50.80%, avg=4096.00, stdev= 0.00, samples=1 00:17:31.717 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:31.717 lat (usec) : 500=0.40%, 750=17.78%, 1000=34.65% 00:17:31.717 lat (msec) : 2=47.17% 00:17:31.717 cpu : usr=2.50%, sys=3.60%, ctx=992, majf=0, minf=1 00:17:31.717 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:31.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:31.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:31.717 issued rwts: total=478,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:31.717 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:31.717 job1: (groupid=0, jobs=1): err= 0: pid=3531171: Mon Jul 15 11:29:00 2024 00:17:31.717 read: IOPS=447, BW=1788KiB/s (1831kB/s)(1792KiB/1002msec) 00:17:31.717 slat (nsec): min=6939, max=61800, avg=26944.07, stdev=4020.48 00:17:31.717 clat (usec): min=962, max=1416, avg=1239.56, stdev=60.04 00:17:31.717 lat (usec): min=988, max=1442, avg=1266.50, stdev=60.19 00:17:31.717 clat percentiles (usec): 00:17:31.717 | 1.00th=[ 1037], 5.00th=[ 1123], 10.00th=[ 1172], 20.00th=[ 1188], 00:17:31.717 | 30.00th=[ 1221], 40.00th=[ 1237], 50.00th=[ 1254], 60.00th=[ 1254], 00:17:31.717 | 70.00th=[ 1270], 80.00th=[ 1287], 90.00th=[ 1303], 95.00th=[ 1319], 00:17:31.717 | 99.00th=[ 1369], 99.50th=[ 1385], 99.90th=[ 1418], 99.95th=[ 1418], 00:17:31.717 | 99.99th=[ 1418] 00:17:31.717 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:17:31.717 slat (nsec): min=9557, max=52488, avg=32693.28, stdev=7955.20 00:17:31.717 clat (usec): min=408, max=1525, avg=795.54, stdev=99.86 00:17:31.717 lat (usec): min=420, max=1558, avg=828.24, stdev=102.33 00:17:31.717 clat percentiles (usec): 00:17:31.717 | 1.00th=[ 537], 5.00th=[ 627], 10.00th=[ 685], 20.00th=[ 717], 00:17:31.717 | 30.00th=[ 742], 40.00th=[ 775], 50.00th=[ 799], 60.00th=[ 824], 00:17:31.717 | 70.00th=[ 848], 80.00th=[ 873], 90.00th=[ 914], 95.00th=[ 947], 00:17:31.717 | 99.00th=[ 996], 99.50th=[ 1020], 99.90th=[ 1532], 99.95th=[ 1532], 00:17:31.717 | 99.99th=[ 1532] 00:17:31.717 bw ( KiB/s): min= 4096, max= 4096, per=50.80%, avg=4096.00, stdev= 0.00, samples=1 00:17:31.717 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:31.717 lat (usec) : 500=0.21%, 750=17.40%, 1000=35.42% 00:17:31.717 lat (msec) : 2=46.98% 00:17:31.717 cpu : usr=2.20%, sys=3.70%, ctx=963, majf=0, minf=1 00:17:31.717 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:31.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:31.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:31.717 issued rwts: total=448,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:31.717 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:31.717 job2: (groupid=0, jobs=1): err= 0: pid=3531189: Mon Jul 15 11:29:00 2024 00:17:31.717 read: 
IOPS=14, BW=59.8KiB/s (61.3kB/s)(60.0KiB/1003msec) 00:17:31.717 slat (nsec): min=24934, max=25324, avg=25093.13, stdev=106.37 00:17:31.717 clat (usec): min=1427, max=42069, avg=39218.02, stdev=10455.56 00:17:31.717 lat (usec): min=1453, max=42095, avg=39243.11, stdev=10455.56 00:17:31.717 clat percentiles (usec): 00:17:31.717 | 1.00th=[ 1434], 5.00th=[ 1434], 10.00th=[41157], 20.00th=[41681], 00:17:31.717 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:17:31.717 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:31.717 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:31.717 | 99.99th=[42206] 00:17:31.717 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:17:31.717 slat (nsec): min=10157, max=56304, avg=27127.73, stdev=10086.35 00:17:31.717 clat (usec): min=391, max=1008, avg=773.27, stdev=109.97 00:17:31.717 lat (usec): min=425, max=1041, avg=800.39, stdev=112.60 00:17:31.717 clat percentiles (usec): 00:17:31.717 | 1.00th=[ 490], 5.00th=[ 611], 10.00th=[ 627], 20.00th=[ 668], 00:17:31.717 | 30.00th=[ 717], 40.00th=[ 750], 50.00th=[ 775], 60.00th=[ 816], 00:17:31.717 | 70.00th=[ 848], 80.00th=[ 873], 90.00th=[ 906], 95.00th=[ 938], 00:17:31.717 | 99.00th=[ 988], 99.50th=[ 1004], 99.90th=[ 1012], 99.95th=[ 1012], 00:17:31.717 | 99.99th=[ 1012] 00:17:31.717 bw ( KiB/s): min= 4096, max= 4096, per=50.80%, avg=4096.00, stdev= 0.00, samples=1 00:17:31.717 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:31.717 lat (usec) : 500=1.14%, 750=38.90%, 1000=56.55% 00:17:31.717 lat (msec) : 2=0.76%, 50=2.66% 00:17:31.717 cpu : usr=0.90%, sys=1.20%, ctx=528, majf=0, minf=1 00:17:31.717 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:31.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:31.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:31.717 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:31.717 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:31.717 job3: (groupid=0, jobs=1): err= 0: pid=3531196: Mon Jul 15 11:29:00 2024 00:17:31.717 read: IOPS=14, BW=59.1KiB/s (60.5kB/s)(60.0KiB/1016msec) 00:17:31.717 slat (nsec): min=24352, max=25074, avg=24547.40, stdev=174.04 00:17:31.717 clat (usec): min=1350, max=42095, avg=39211.40, stdev=10475.35 00:17:31.717 lat (usec): min=1375, max=42119, avg=39235.94, stdev=10475.31 00:17:31.717 clat percentiles (usec): 00:17:31.717 | 1.00th=[ 1352], 5.00th=[ 1352], 10.00th=[41157], 20.00th=[41681], 00:17:31.717 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:17:31.717 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:31.717 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:31.717 | 99.99th=[42206] 00:17:31.717 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:17:31.717 slat (nsec): min=9900, max=50327, avg=29632.35, stdev=7664.55 00:17:31.717 clat (usec): min=504, max=1080, avg=796.94, stdev=100.36 00:17:31.717 lat (usec): min=527, max=1113, avg=826.57, stdev=103.28 00:17:31.717 clat percentiles (usec): 00:17:31.718 | 1.00th=[ 545], 5.00th=[ 627], 10.00th=[ 660], 20.00th=[ 717], 00:17:31.718 | 30.00th=[ 750], 40.00th=[ 775], 50.00th=[ 799], 60.00th=[ 832], 00:17:31.718 | 70.00th=[ 857], 80.00th=[ 881], 90.00th=[ 922], 95.00th=[ 955], 00:17:31.718 | 99.00th=[ 1012], 99.50th=[ 1045], 99.90th=[ 1074], 99.95th=[ 1074], 
00:17:31.718 | 99.99th=[ 1074] 00:17:31.718 bw ( KiB/s): min= 4096, max= 4096, per=50.80%, avg=4096.00, stdev= 0.00, samples=1 00:17:31.718 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:31.718 lat (usec) : 750=30.74%, 1000=65.09% 00:17:31.718 lat (msec) : 2=1.52%, 50=2.66% 00:17:31.718 cpu : usr=0.69%, sys=1.58%, ctx=527, majf=0, minf=1 00:17:31.718 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:31.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:31.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:31.718 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:31.718 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:31.718 00:17:31.718 Run status group 0 (all jobs): 00:17:31.718 READ: bw=3764KiB/s (3854kB/s), 59.1KiB/s-1910KiB/s (60.5kB/s-1956kB/s), io=3824KiB (3916kB), run=1001-1016msec 00:17:31.718 WRITE: bw=8063KiB/s (8257kB/s), 2016KiB/s-2046KiB/s (2064kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1016msec 00:17:31.718 00:17:31.718 Disk stats (read/write): 00:17:31.718 nvme0n1: ios=360/512, merge=0/0, ticks=1317/334, in_queue=1651, util=95.89% 00:17:31.718 nvme0n2: ios=364/512, merge=0/0, ticks=681/338, in_queue=1019, util=96.82% 00:17:31.718 nvme0n3: ios=32/512, merge=0/0, ticks=1299/399, in_queue=1698, util=96.17% 00:17:31.718 nvme0n4: ios=10/512, merge=0/0, ticks=379/378, in_queue=757, util=89.45% 00:17:31.718 11:29:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:31.718 [global] 00:17:31.718 thread=1 00:17:31.718 invalidate=1 00:17:31.718 rw=randwrite 00:17:31.718 time_based=1 00:17:31.718 runtime=1 00:17:31.718 ioengine=libaio 00:17:31.718 direct=1 00:17:31.718 bs=4096 00:17:31.718 iodepth=1 00:17:31.718 norandommap=0 00:17:31.718 numjobs=1 00:17:31.718 00:17:31.718 verify_dump=1 00:17:31.718 verify_backlog=512 00:17:31.718 verify_state_save=0 00:17:31.718 do_verify=1 00:17:31.718 verify=crc32c-intel 00:17:31.718 [job0] 00:17:31.718 filename=/dev/nvme0n1 00:17:31.718 [job1] 00:17:31.718 filename=/dev/nvme0n2 00:17:31.718 [job2] 00:17:31.718 filename=/dev/nvme0n3 00:17:31.718 [job3] 00:17:31.718 filename=/dev/nvme0n4 00:17:31.718 Could not set queue depth (nvme0n1) 00:17:31.718 Could not set queue depth (nvme0n2) 00:17:31.718 Could not set queue depth (nvme0n3) 00:17:31.718 Could not set queue depth (nvme0n4) 00:17:31.985 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:31.985 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:31.985 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:31.985 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:31.985 fio-3.35 00:17:31.985 Starting 4 threads 00:17:33.400 00:17:33.400 job0: (groupid=0, jobs=1): err= 0: pid=3531623: Mon Jul 15 11:29:01 2024 00:17:33.400 read: IOPS=13, BW=54.8KiB/s (56.2kB/s)(56.0KiB/1021msec) 00:17:33.400 slat (nsec): min=23891, max=24522, avg=24103.50, stdev=176.85 00:17:33.400 clat (usec): min=41594, max=42059, avg=41943.63, stdev=111.43 00:17:33.400 lat (usec): min=41618, max=42084, avg=41967.73, stdev=111.38 00:17:33.400 clat percentiles (usec): 00:17:33.400 | 1.00th=[41681], 5.00th=[41681], 
10.00th=[41681], 20.00th=[41681], 00:17:33.400 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:33.400 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:33.400 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:33.400 | 99.99th=[42206] 00:17:33.400 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:17:33.400 slat (nsec): min=9372, max=53886, avg=28597.71, stdev=7353.12 00:17:33.400 clat (usec): min=491, max=1025, avg=809.22, stdev=91.70 00:17:33.400 lat (usec): min=501, max=1046, avg=837.81, stdev=94.31 00:17:33.400 clat percentiles (usec): 00:17:33.400 | 1.00th=[ 537], 5.00th=[ 644], 10.00th=[ 701], 20.00th=[ 742], 00:17:33.400 | 30.00th=[ 766], 40.00th=[ 791], 50.00th=[ 816], 60.00th=[ 840], 00:17:33.400 | 70.00th=[ 865], 80.00th=[ 889], 90.00th=[ 914], 95.00th=[ 947], 00:17:33.400 | 99.00th=[ 979], 99.50th=[ 996], 99.90th=[ 1029], 99.95th=[ 1029], 00:17:33.400 | 99.99th=[ 1029] 00:17:33.400 bw ( KiB/s): min= 4096, max= 4096, per=51.80%, avg=4096.00, stdev= 0.00, samples=1 00:17:33.400 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:33.400 lat (usec) : 500=0.57%, 750=23.57%, 1000=72.81% 00:17:33.400 lat (msec) : 2=0.38%, 50=2.66% 00:17:33.400 cpu : usr=0.49%, sys=1.67%, ctx=526, majf=0, minf=1 00:17:33.400 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:33.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.400 issued rwts: total=14,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:33.400 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:33.400 job1: (groupid=0, jobs=1): err= 0: pid=3531644: Mon Jul 15 11:29:01 2024 00:17:33.400 read: IOPS=300, BW=1201KiB/s (1230kB/s)(1244KiB/1036msec) 00:17:33.400 slat (nsec): min=26212, max=61079, avg=27298.08, stdev=3440.85 00:17:33.400 clat (usec): min=964, max=42331, avg=1930.69, stdev=5588.16 00:17:33.400 lat (usec): min=1003, max=42358, avg=1957.99, stdev=5588.08 00:17:33.400 clat percentiles (usec): 00:17:33.400 | 1.00th=[ 996], 5.00th=[ 1057], 10.00th=[ 1074], 20.00th=[ 1106], 00:17:33.400 | 30.00th=[ 1123], 40.00th=[ 1139], 50.00th=[ 1156], 60.00th=[ 1172], 00:17:33.400 | 70.00th=[ 1172], 80.00th=[ 1188], 90.00th=[ 1221], 95.00th=[ 1254], 00:17:33.400 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:17:33.400 | 99.99th=[42206] 00:17:33.400 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:17:33.400 slat (nsec): min=9084, max=69553, avg=31841.06, stdev=7887.55 00:17:33.400 clat (usec): min=253, max=2186, avg=784.27, stdev=142.57 00:17:33.400 lat (usec): min=262, max=2224, avg=816.11, stdev=144.87 00:17:33.400 clat percentiles (usec): 00:17:33.400 | 1.00th=[ 478], 5.00th=[ 545], 10.00th=[ 611], 20.00th=[ 685], 00:17:33.400 | 30.00th=[ 725], 40.00th=[ 758], 50.00th=[ 791], 60.00th=[ 824], 00:17:33.400 | 70.00th=[ 857], 80.00th=[ 889], 90.00th=[ 947], 95.00th=[ 988], 00:17:33.400 | 99.00th=[ 1045], 99.50th=[ 1106], 99.90th=[ 2180], 99.95th=[ 2180], 00:17:33.400 | 99.99th=[ 2180] 00:17:33.400 bw ( KiB/s): min= 4096, max= 4096, per=51.80%, avg=4096.00, stdev= 0.00, samples=1 00:17:33.400 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:33.400 lat (usec) : 500=1.34%, 750=21.99%, 1000=37.55% 00:17:33.400 lat (msec) : 2=38.27%, 4=0.12%, 50=0.73% 00:17:33.400 cpu : usr=2.13%, sys=2.80%, ctx=824, 
majf=0, minf=1 00:17:33.400 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:33.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.400 issued rwts: total=311,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:33.400 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:33.400 job2: (groupid=0, jobs=1): err= 0: pid=3531662: Mon Jul 15 11:29:01 2024 00:17:33.400 read: IOPS=205, BW=823KiB/s (843kB/s)(824KiB/1001msec) 00:17:33.400 slat (nsec): min=26422, max=44520, avg=27350.33, stdev=2665.27 00:17:33.400 clat (usec): min=832, max=42252, avg=2682.66, stdev=7909.64 00:17:33.400 lat (usec): min=859, max=42279, avg=2710.01, stdev=7909.52 00:17:33.400 clat percentiles (usec): 00:17:33.400 | 1.00th=[ 930], 5.00th=[ 996], 10.00th=[ 1029], 20.00th=[ 1057], 00:17:33.400 | 30.00th=[ 1057], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1106], 00:17:33.400 | 70.00th=[ 1123], 80.00th=[ 1156], 90.00th=[ 1237], 95.00th=[ 1319], 00:17:33.400 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:33.400 | 99.99th=[42206] 00:17:33.400 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:33.400 slat (nsec): min=9357, max=54030, avg=32721.83, stdev=6868.32 00:17:33.400 clat (usec): min=364, max=1149, avg=817.96, stdev=122.94 00:17:33.400 lat (usec): min=375, max=1183, avg=850.68, stdev=124.91 00:17:33.400 clat percentiles (usec): 00:17:33.400 | 1.00th=[ 519], 5.00th=[ 611], 10.00th=[ 668], 20.00th=[ 709], 00:17:33.400 | 30.00th=[ 750], 40.00th=[ 791], 50.00th=[ 824], 60.00th=[ 848], 00:17:33.400 | 70.00th=[ 889], 80.00th=[ 922], 90.00th=[ 979], 95.00th=[ 1004], 00:17:33.400 | 99.00th=[ 1090], 99.50th=[ 1090], 99.90th=[ 1156], 99.95th=[ 1156], 00:17:33.400 | 99.99th=[ 1156] 00:17:33.400 bw ( KiB/s): min= 4096, max= 4096, per=51.80%, avg=4096.00, stdev= 0.00, samples=1 00:17:33.400 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:33.400 lat (usec) : 500=0.56%, 750=20.89%, 1000=47.35% 00:17:33.401 lat (msec) : 2=30.08%, 50=1.11% 00:17:33.401 cpu : usr=2.20%, sys=2.40%, ctx=720, majf=0, minf=1 00:17:33.401 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:33.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.401 issued rwts: total=206,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:33.401 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:33.401 job3: (groupid=0, jobs=1): err= 0: pid=3531669: Mon Jul 15 11:29:01 2024 00:17:33.401 read: IOPS=14, BW=58.9KiB/s (60.3kB/s)(60.0KiB/1019msec) 00:17:33.401 slat (nsec): min=10409, max=25524, avg=24098.60, stdev=3790.66 00:17:33.401 clat (usec): min=1323, max=42035, avg=39214.47, stdev=10484.06 00:17:33.401 lat (usec): min=1333, max=42060, avg=39238.57, stdev=10487.85 00:17:33.401 clat percentiles (usec): 00:17:33.401 | 1.00th=[ 1319], 5.00th=[ 1319], 10.00th=[41157], 20.00th=[41681], 00:17:33.401 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:33.401 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:33.401 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:33.401 | 99.99th=[42206] 00:17:33.401 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:17:33.401 slat (nsec): min=9798, max=52659, 
avg=30306.30, stdev=7455.57 00:17:33.401 clat (usec): min=394, max=1211, avg=800.96, stdev=116.96 00:17:33.401 lat (usec): min=405, max=1242, avg=831.26, stdev=119.08 00:17:33.401 clat percentiles (usec): 00:17:33.401 | 1.00th=[ 494], 5.00th=[ 603], 10.00th=[ 652], 20.00th=[ 709], 00:17:33.401 | 30.00th=[ 750], 40.00th=[ 775], 50.00th=[ 807], 60.00th=[ 840], 00:17:33.401 | 70.00th=[ 873], 80.00th=[ 898], 90.00th=[ 938], 95.00th=[ 971], 00:17:33.401 | 99.00th=[ 1045], 99.50th=[ 1090], 99.90th=[ 1205], 99.95th=[ 1205], 00:17:33.401 | 99.99th=[ 1205] 00:17:33.401 bw ( KiB/s): min= 4096, max= 4096, per=51.80%, avg=4096.00, stdev= 0.00, samples=1 00:17:33.401 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:33.401 lat (usec) : 500=1.33%, 750=29.03%, 1000=64.33% 00:17:33.401 lat (msec) : 2=2.66%, 50=2.66% 00:17:33.401 cpu : usr=0.79%, sys=1.47%, ctx=528, majf=0, minf=1 00:17:33.401 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:33.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.401 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:33.401 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:33.401 00:17:33.401 Run status group 0 (all jobs): 00:17:33.401 READ: bw=2108KiB/s (2159kB/s), 54.8KiB/s-1201KiB/s (56.2kB/s-1230kB/s), io=2184KiB (2236kB), run=1001-1036msec 00:17:33.401 WRITE: bw=7907KiB/s (8097kB/s), 1977KiB/s-2046KiB/s (2024kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1036msec 00:17:33.401 00:17:33.401 Disk stats (read/write): 00:17:33.401 nvme0n1: ios=59/512, merge=0/0, ticks=440/386, in_queue=826, util=86.97% 00:17:33.401 nvme0n2: ios=249/512, merge=0/0, ticks=963/327, in_queue=1290, util=97.03% 00:17:33.401 nvme0n3: ios=190/512, merge=0/0, ticks=1290/332, in_queue=1622, util=96.62% 00:17:33.401 nvme0n4: ios=67/512, merge=0/0, ticks=956/384, in_queue=1340, util=97.01% 00:17:33.401 11:29:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:33.401 [global] 00:17:33.401 thread=1 00:17:33.401 invalidate=1 00:17:33.401 rw=write 00:17:33.401 time_based=1 00:17:33.401 runtime=1 00:17:33.401 ioengine=libaio 00:17:33.401 direct=1 00:17:33.401 bs=4096 00:17:33.401 iodepth=128 00:17:33.401 norandommap=0 00:17:33.401 numjobs=1 00:17:33.401 00:17:33.401 verify_dump=1 00:17:33.401 verify_backlog=512 00:17:33.401 verify_state_save=0 00:17:33.401 do_verify=1 00:17:33.401 verify=crc32c-intel 00:17:33.401 [job0] 00:17:33.401 filename=/dev/nvme0n1 00:17:33.401 [job1] 00:17:33.401 filename=/dev/nvme0n2 00:17:33.401 [job2] 00:17:33.401 filename=/dev/nvme0n3 00:17:33.401 [job3] 00:17:33.401 filename=/dev/nvme0n4 00:17:33.401 Could not set queue depth (nvme0n1) 00:17:33.401 Could not set queue depth (nvme0n2) 00:17:33.401 Could not set queue depth (nvme0n3) 00:17:33.401 Could not set queue depth (nvme0n4) 00:17:33.667 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:33.667 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:33.667 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:33.667 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:33.667 fio-3.35 
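Note: the fio-wrapper command traced above is, for practical purposes, a plain fio run of roughly the shape sketched below. This assumes the wrapper simply expands its -p/-i/-d/-t/-r flags into the job options already listed in the [global] section; the real job file also carries one [jobN] stanza per namespace, differing only in the filename.

fio --name=job0 --filename=/dev/nvme0n1 \
    --ioengine=libaio --direct=1 --thread \
    --rw=write --bs=4096 --iodepth=128 --numjobs=1 \
    --time_based --runtime=1 \
    --verify=crc32c-intel --do_verify=1 --verify_dump=1 \
    --verify_backlog=512 --verify_state_save=0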
00:17:33.667 Starting 4 threads 00:17:35.068 00:17:35.068 job0: (groupid=0, jobs=1): err= 0: pid=3532113: Mon Jul 15 11:29:03 2024 00:17:35.068 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:17:35.068 slat (nsec): min=877, max=29614k, avg=140149.23, stdev=1162488.32 00:17:35.068 clat (usec): min=4957, max=55286, avg=17253.31, stdev=9126.97 00:17:35.068 lat (usec): min=4987, max=55310, avg=17393.45, stdev=9219.89 00:17:35.068 clat percentiles (usec): 00:17:35.068 | 1.00th=[ 5800], 5.00th=[ 6915], 10.00th=[ 8029], 20.00th=[10159], 00:17:35.068 | 30.00th=[11994], 40.00th=[13304], 50.00th=[14091], 60.00th=[16057], 00:17:35.068 | 70.00th=[19268], 80.00th=[25560], 90.00th=[31851], 95.00th=[34341], 00:17:35.068 | 99.00th=[43779], 99.50th=[46400], 99.90th=[48497], 99.95th=[50070], 00:17:35.068 | 99.99th=[55313] 00:17:35.068 write: IOPS=4018, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1005msec); 0 zone resets 00:17:35.068 slat (nsec): min=1573, max=17029k, avg=111369.26, stdev=807203.20 00:17:35.068 clat (usec): min=704, max=71187, avg=16268.29, stdev=12190.23 00:17:35.068 lat (usec): min=712, max=71189, avg=16379.66, stdev=12259.02 00:17:35.068 clat percentiles (usec): 00:17:35.068 | 1.00th=[ 1745], 5.00th=[ 4490], 10.00th=[ 6521], 20.00th=[ 7701], 00:17:35.068 | 30.00th=[ 8979], 40.00th=[10421], 50.00th=[13304], 60.00th=[15533], 00:17:35.068 | 70.00th=[17433], 80.00th=[22676], 90.00th=[32113], 95.00th=[37487], 00:17:35.068 | 99.00th=[68682], 99.50th=[69731], 99.90th=[70779], 99.95th=[70779], 00:17:35.068 | 99.99th=[70779] 00:17:35.068 bw ( KiB/s): min=15344, max=15952, per=18.65%, avg=15648.00, stdev=429.92, samples=2 00:17:35.068 iops : min= 3836, max= 3988, avg=3912.00, stdev=107.48, samples=2 00:17:35.068 lat (usec) : 750=0.10%, 1000=0.01% 00:17:35.068 lat (msec) : 2=0.72%, 4=1.63%, 10=25.70%, 20=46.39%, 50=23.76% 00:17:35.068 lat (msec) : 100=1.69% 00:17:35.068 cpu : usr=2.49%, sys=4.58%, ctx=258, majf=0, minf=1 00:17:35.068 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:35.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:35.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:35.068 issued rwts: total=3584,4039,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:35.068 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:35.068 job1: (groupid=0, jobs=1): err= 0: pid=3532122: Mon Jul 15 11:29:03 2024 00:17:35.068 read: IOPS=5599, BW=21.9MiB/s (22.9MB/s)(22.8MiB/1043msec) 00:17:35.068 slat (nsec): min=891, max=12622k, avg=77875.21, stdev=526781.75 00:17:35.068 clat (usec): min=3301, max=48896, avg=10767.76, stdev=6913.77 00:17:35.068 lat (usec): min=3302, max=50553, avg=10845.64, stdev=6937.90 00:17:35.068 clat percentiles (usec): 00:17:35.068 | 1.00th=[ 5145], 5.00th=[ 5932], 10.00th=[ 6980], 20.00th=[ 7898], 00:17:35.068 | 30.00th=[ 8094], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8717], 00:17:35.068 | 70.00th=[ 9372], 80.00th=[11469], 90.00th=[17695], 95.00th=[23200], 00:17:35.068 | 99.00th=[46400], 99.50th=[48497], 99.90th=[49021], 99.95th=[49021], 00:17:35.068 | 99.99th=[49021] 00:17:35.068 write: IOPS=5890, BW=23.0MiB/s (24.1MB/s)(24.0MiB/1043msec); 0 zone resets 00:17:35.068 slat (nsec): min=1561, max=29964k, avg=84679.13, stdev=700513.00 00:17:35.068 clat (usec): min=2980, max=49796, avg=11268.69, stdev=6473.04 00:17:35.068 lat (usec): min=2982, max=49809, avg=11353.37, stdev=6519.58 00:17:35.068 clat percentiles (usec): 00:17:35.068 | 1.00th=[ 4752], 5.00th=[ 6259], 10.00th=[ 7111], 
20.00th=[ 7963], 00:17:35.068 | 30.00th=[ 8094], 40.00th=[ 8356], 50.00th=[ 8586], 60.00th=[ 9110], 00:17:35.068 | 70.00th=[ 9896], 80.00th=[13566], 90.00th=[19792], 95.00th=[27657], 00:17:35.068 | 99.00th=[35390], 99.50th=[35390], 99.90th=[36963], 99.95th=[36963], 00:17:35.068 | 99.99th=[49546] 00:17:35.068 bw ( KiB/s): min=20480, max=28672, per=29.29%, avg=24576.00, stdev=5792.62, samples=2 00:17:35.068 iops : min= 5120, max= 7168, avg=6144.00, stdev=1448.15, samples=2 00:17:35.068 lat (msec) : 4=0.38%, 10=72.24%, 20=19.82%, 50=7.56% 00:17:35.068 cpu : usr=2.50%, sys=4.13%, ctx=549, majf=0, minf=1 00:17:35.068 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:35.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:35.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:35.068 issued rwts: total=5840,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:35.068 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:35.068 job2: (groupid=0, jobs=1): err= 0: pid=3532137: Mon Jul 15 11:29:03 2024 00:17:35.068 read: IOPS=7442, BW=29.1MiB/s (30.5MB/s)(29.1MiB/1002msec) 00:17:35.068 slat (nsec): min=903, max=9406.3k, avg=62182.62, stdev=465264.70 00:17:35.068 clat (usec): min=1112, max=21507, avg=8844.79, stdev=2796.16 00:17:35.068 lat (usec): min=3820, max=21514, avg=8906.97, stdev=2818.79 00:17:35.068 clat percentiles (usec): 00:17:35.068 | 1.00th=[ 4621], 5.00th=[ 5407], 10.00th=[ 5997], 20.00th=[ 6718], 00:17:35.068 | 30.00th=[ 7111], 40.00th=[ 7570], 50.00th=[ 8160], 60.00th=[ 8717], 00:17:35.068 | 70.00th=[ 9634], 80.00th=[10945], 90.00th=[12911], 95.00th=[14222], 00:17:35.068 | 99.00th=[17957], 99.50th=[19268], 99.90th=[21103], 99.95th=[21103], 00:17:35.068 | 99.99th=[21627] 00:17:35.068 write: IOPS=7664, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1002msec); 0 zone resets 00:17:35.068 slat (nsec): min=1587, max=8234.6k, avg=53883.55, stdev=341704.00 00:17:35.068 clat (usec): min=715, max=22890, avg=7956.18, stdev=3221.75 00:17:35.068 lat (usec): min=743, max=22899, avg=8010.06, stdev=3236.89 00:17:35.068 clat percentiles (usec): 00:17:35.068 | 1.00th=[ 2278], 5.00th=[ 4113], 10.00th=[ 4490], 20.00th=[ 5407], 00:17:35.068 | 30.00th=[ 6128], 40.00th=[ 6718], 50.00th=[ 7308], 60.00th=[ 7767], 00:17:35.068 | 70.00th=[ 8979], 80.00th=[10290], 90.00th=[12387], 95.00th=[14222], 00:17:35.068 | 99.00th=[18744], 99.50th=[19530], 99.90th=[21103], 99.95th=[21890], 00:17:35.068 | 99.99th=[22938] 00:17:35.068 bw ( KiB/s): min=25376, max=36064, per=36.62%, avg=30720.00, stdev=7557.56, samples=2 00:17:35.068 iops : min= 6344, max= 9016, avg=7680.00, stdev=1889.39, samples=2 00:17:35.068 lat (usec) : 750=0.03% 00:17:35.068 lat (msec) : 2=0.34%, 4=1.98%, 10=72.76%, 20=24.67%, 50=0.22% 00:17:35.069 cpu : usr=4.70%, sys=6.89%, ctx=676, majf=0, minf=1 00:17:35.069 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:35.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:35.069 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:35.069 issued rwts: total=7457,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:35.069 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:35.069 job3: (groupid=0, jobs=1): err= 0: pid=3532144: Mon Jul 15 11:29:03 2024 00:17:35.069 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:17:35.069 slat (nsec): min=885, max=26131k, avg=135354.94, stdev=1000460.64 00:17:35.069 clat (usec): min=6426, max=65325, 
avg=16888.98, stdev=7870.16 00:17:35.069 lat (usec): min=6432, max=70074, avg=17024.34, stdev=7977.06 00:17:35.069 clat percentiles (usec): 00:17:35.069 | 1.00th=[ 7439], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10421], 00:17:35.069 | 30.00th=[11863], 40.00th=[13960], 50.00th=[15270], 60.00th=[16581], 00:17:35.069 | 70.00th=[19006], 80.00th=[21890], 90.00th=[25035], 95.00th=[31065], 00:17:35.069 | 99.00th=[46400], 99.50th=[50594], 99.90th=[65274], 99.95th=[65274], 00:17:35.069 | 99.99th=[65274] 00:17:35.069 write: IOPS=3989, BW=15.6MiB/s (16.3MB/s)(15.7MiB/1006msec); 0 zone resets 00:17:35.069 slat (nsec): min=1517, max=27093k, avg=109184.81, stdev=860081.46 00:17:35.069 clat (usec): min=809, max=90783, avg=16721.33, stdev=13336.61 00:17:35.069 lat (usec): min=1207, max=90789, avg=16830.51, stdev=13401.34 00:17:35.069 clat percentiles (usec): 00:17:35.069 | 1.00th=[ 5997], 5.00th=[ 8094], 10.00th=[ 8455], 20.00th=[ 9503], 00:17:35.069 | 30.00th=[10159], 40.00th=[11076], 50.00th=[12256], 60.00th=[14091], 00:17:35.069 | 70.00th=[17433], 80.00th=[21627], 90.00th=[26346], 95.00th=[32637], 00:17:35.069 | 99.00th=[87557], 99.50th=[89654], 99.90th=[90702], 99.95th=[90702], 00:17:35.069 | 99.99th=[90702] 00:17:35.069 bw ( KiB/s): min=12304, max=18776, per=18.52%, avg=15540.00, stdev=4576.40, samples=2 00:17:35.069 iops : min= 3076, max= 4694, avg=3885.00, stdev=1144.10, samples=2 00:17:35.069 lat (usec) : 1000=0.01% 00:17:35.069 lat (msec) : 2=0.16%, 4=0.11%, 10=21.72%, 20=53.03%, 50=22.89% 00:17:35.069 lat (msec) : 100=2.08% 00:17:35.069 cpu : usr=3.18%, sys=2.89%, ctx=394, majf=0, minf=1 00:17:35.069 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:35.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:35.069 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:35.069 issued rwts: total=3584,4013,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:35.069 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:35.069 00:17:35.069 Run status group 0 (all jobs): 00:17:35.069 READ: bw=76.6MiB/s (80.4MB/s), 13.9MiB/s-29.1MiB/s (14.6MB/s-30.5MB/s), io=79.9MiB (83.8MB), run=1002-1043msec 00:17:35.069 WRITE: bw=81.9MiB/s (85.9MB/s), 15.6MiB/s-29.9MiB/s (16.3MB/s-31.4MB/s), io=85.5MiB (89.6MB), run=1002-1043msec 00:17:35.069 00:17:35.069 Disk stats (read/write): 00:17:35.069 nvme0n1: ios=3100/3584, merge=0/0, ticks=43083/51608, in_queue=94691, util=98.40% 00:17:35.069 nvme0n2: ios=4920/5120, merge=0/0, ticks=26319/40813, in_queue=67132, util=98.88% 00:17:35.069 nvme0n3: ios=6714/6799, merge=0/0, ticks=51458/47251, in_queue=98709, util=98.12% 00:17:35.069 nvme0n4: ios=2851/3072, merge=0/0, ticks=40713/40628, in_queue=81341, util=89.58% 00:17:35.069 11:29:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:35.069 [global] 00:17:35.069 thread=1 00:17:35.069 invalidate=1 00:17:35.069 rw=randwrite 00:17:35.069 time_based=1 00:17:35.069 runtime=1 00:17:35.069 ioengine=libaio 00:17:35.069 direct=1 00:17:35.069 bs=4096 00:17:35.069 iodepth=128 00:17:35.069 norandommap=0 00:17:35.069 numjobs=1 00:17:35.069 00:17:35.069 verify_dump=1 00:17:35.069 verify_backlog=512 00:17:35.069 verify_state_save=0 00:17:35.069 do_verify=1 00:17:35.069 verify=crc32c-intel 00:17:35.069 [job0] 00:17:35.069 filename=/dev/nvme0n1 00:17:35.069 [job1] 00:17:35.069 filename=/dev/nvme0n2 00:17:35.069 [job2] 00:17:35.069 
filename=/dev/nvme0n3 00:17:35.069 [job3] 00:17:35.069 filename=/dev/nvme0n4 00:17:35.069 Could not set queue depth (nvme0n1) 00:17:35.069 Could not set queue depth (nvme0n2) 00:17:35.069 Could not set queue depth (nvme0n3) 00:17:35.069 Could not set queue depth (nvme0n4) 00:17:35.331 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:35.331 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:35.331 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:35.331 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:35.331 fio-3.35 00:17:35.331 Starting 4 threads 00:17:36.733 00:17:36.733 job0: (groupid=0, jobs=1): err= 0: pid=3532603: Mon Jul 15 11:29:05 2024 00:17:36.733 read: IOPS=4688, BW=18.3MiB/s (19.2MB/s)(18.5MiB/1010msec) 00:17:36.733 slat (nsec): min=896, max=11458k, avg=100734.95, stdev=721185.37 00:17:36.733 clat (usec): min=5370, max=38833, avg=13081.40, stdev=3633.63 00:17:36.733 lat (usec): min=5375, max=38841, avg=13182.14, stdev=3679.52 00:17:36.733 clat percentiles (usec): 00:17:36.733 | 1.00th=[ 7504], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[10421], 00:17:36.733 | 30.00th=[11076], 40.00th=[11600], 50.00th=[12387], 60.00th=[13435], 00:17:36.733 | 70.00th=[14353], 80.00th=[15533], 90.00th=[17171], 95.00th=[18744], 00:17:36.733 | 99.00th=[26084], 99.50th=[27657], 99.90th=[39060], 99.95th=[39060], 00:17:36.733 | 99.99th=[39060] 00:17:36.733 write: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec); 0 zone resets 00:17:36.733 slat (nsec): min=1535, max=11116k, avg=97877.96, stdev=626923.90 00:17:36.733 clat (usec): min=1139, max=78862, avg=12929.85, stdev=10171.03 00:17:36.733 lat (usec): min=1149, max=78870, avg=13027.73, stdev=10229.43 00:17:36.733 clat percentiles (usec): 00:17:36.733 | 1.00th=[ 3589], 5.00th=[ 5735], 10.00th=[ 6783], 20.00th=[ 7570], 00:17:36.733 | 30.00th=[ 7963], 40.00th=[ 8979], 50.00th=[10028], 60.00th=[10683], 00:17:36.733 | 70.00th=[12518], 80.00th=[15139], 90.00th=[22152], 95.00th=[29754], 00:17:36.733 | 99.00th=[66323], 99.50th=[70779], 99.90th=[79168], 99.95th=[79168], 00:17:36.733 | 99.99th=[79168] 00:17:36.733 bw ( KiB/s): min=19560, max=21392, per=22.18%, avg=20476.00, stdev=1295.42, samples=2 00:17:36.733 iops : min= 4890, max= 5348, avg=5119.00, stdev=323.85, samples=2 00:17:36.733 lat (msec) : 2=0.10%, 4=0.60%, 10=32.91%, 20=57.83%, 50=7.52% 00:17:36.733 lat (msec) : 100=1.05% 00:17:36.733 cpu : usr=3.47%, sys=5.15%, ctx=364, majf=0, minf=1 00:17:36.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:36.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:36.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:36.733 issued rwts: total=4735,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:36.733 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:36.733 job1: (groupid=0, jobs=1): err= 0: pid=3532623: Mon Jul 15 11:29:05 2024 00:17:36.733 read: IOPS=5902, BW=23.1MiB/s (24.2MB/s)(23.3MiB/1012msec) 00:17:36.733 slat (nsec): min=881, max=28564k, avg=84830.69, stdev=825992.11 00:17:36.733 clat (usec): min=1374, max=88303, avg=11858.04, stdev=11665.16 00:17:36.733 lat (usec): min=2670, max=89666, avg=11942.87, stdev=11729.11 00:17:36.733 clat percentiles (usec): 00:17:36.733 | 1.00th=[ 4228], 
5.00th=[ 5538], 10.00th=[ 5866], 20.00th=[ 6521], 00:17:36.733 | 30.00th=[ 7111], 40.00th=[ 7832], 50.00th=[ 8094], 60.00th=[ 8979], 00:17:36.733 | 70.00th=[ 9634], 80.00th=[11994], 90.00th=[20055], 95.00th=[40109], 00:17:36.733 | 99.00th=[69731], 99.50th=[72877], 99.90th=[76022], 99.95th=[76022], 00:17:36.733 | 99.99th=[88605] 00:17:36.733 write: IOPS=6071, BW=23.7MiB/s (24.9MB/s)(24.0MiB/1012msec); 0 zone resets 00:17:36.733 slat (nsec): min=1477, max=16657k, avg=76858.30, stdev=597857.94 00:17:36.733 clat (usec): min=1116, max=55988, avg=9355.67, stdev=5901.69 00:17:36.733 lat (usec): min=1125, max=56447, avg=9432.53, stdev=5962.34 00:17:36.733 clat percentiles (usec): 00:17:36.733 | 1.00th=[ 3032], 5.00th=[ 4293], 10.00th=[ 4883], 20.00th=[ 5866], 00:17:36.733 | 30.00th=[ 6521], 40.00th=[ 6915], 50.00th=[ 7635], 60.00th=[ 8717], 00:17:36.733 | 70.00th=[ 9765], 80.00th=[11994], 90.00th=[14353], 95.00th=[19530], 00:17:36.733 | 99.00th=[41681], 99.50th=[41681], 99.90th=[52167], 99.95th=[52167], 00:17:36.733 | 99.99th=[55837] 00:17:36.733 bw ( KiB/s): min=16384, max=32768, per=26.62%, avg=24576.00, stdev=11585.24, samples=2 00:17:36.733 iops : min= 4096, max= 8192, avg=6144.00, stdev=2896.31, samples=2 00:17:36.733 lat (msec) : 2=0.11%, 4=1.87%, 10=70.66%, 20=20.19%, 50=5.45% 00:17:36.733 lat (msec) : 100=1.72% 00:17:36.733 cpu : usr=3.76%, sys=5.54%, ctx=375, majf=0, minf=1 00:17:36.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:36.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:36.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:36.733 issued rwts: total=5973,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:36.733 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:36.733 job2: (groupid=0, jobs=1): err= 0: pid=3532637: Mon Jul 15 11:29:05 2024 00:17:36.733 read: IOPS=5576, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1010msec) 00:17:36.733 slat (nsec): min=908, max=24158k, avg=80783.48, stdev=814398.95 00:17:36.733 clat (msec): min=3, max=104, avg=12.24, stdev= 9.58 00:17:36.733 lat (msec): min=3, max=121, avg=12.32, stdev= 9.66 00:17:36.733 clat percentiles (msec): 00:17:36.733 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8], 00:17:36.733 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 11], 00:17:36.733 | 70.00th=[ 12], 80.00th=[ 13], 90.00th=[ 24], 95.00th=[ 39], 00:17:36.733 | 99.00th=[ 45], 99.50th=[ 46], 99.90th=[ 105], 99.95th=[ 105], 00:17:36.733 | 99.99th=[ 105] 00:17:36.733 write: IOPS=5888, BW=23.0MiB/s (24.1MB/s)(23.2MiB/1010msec); 0 zone resets 00:17:36.733 slat (nsec): min=1556, max=15877k, avg=68061.92, stdev=580128.78 00:17:36.733 clat (usec): min=997, max=77176, avg=9945.72, stdev=8163.27 00:17:36.733 lat (usec): min=1299, max=77182, avg=10013.78, stdev=8194.43 00:17:36.733 clat percentiles (usec): 00:17:36.733 | 1.00th=[ 2737], 5.00th=[ 4293], 10.00th=[ 4948], 20.00th=[ 6063], 00:17:36.733 | 30.00th=[ 7242], 40.00th=[ 7635], 50.00th=[ 8094], 60.00th=[ 8455], 00:17:36.733 | 70.00th=[ 8979], 80.00th=[10683], 90.00th=[14615], 95.00th=[24773], 00:17:36.733 | 99.00th=[46400], 99.50th=[64226], 99.90th=[77071], 99.95th=[77071], 00:17:36.733 | 99.99th=[77071] 00:17:36.733 bw ( KiB/s): min=21976, max=24576, per=25.21%, avg=23276.00, stdev=1838.48, samples=2 00:17:36.733 iops : min= 5494, max= 6144, avg=5819.00, stdev=459.62, samples=2 00:17:36.733 lat (usec) : 1000=0.01% 00:17:36.733 lat (msec) : 2=0.14%, 4=1.92%, 10=66.94%, 20=21.59%, 50=8.90% 00:17:36.733 lat 
(msec) : 100=0.43%, 250=0.08% 00:17:36.733 cpu : usr=3.57%, sys=6.34%, ctx=441, majf=0, minf=1 00:17:36.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:36.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:36.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:36.733 issued rwts: total=5632,5947,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:36.734 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:36.734 job3: (groupid=0, jobs=1): err= 0: pid=3532639: Mon Jul 15 11:29:05 2024 00:17:36.734 read: IOPS=5478, BW=21.4MiB/s (22.4MB/s)(21.5MiB/1003msec) 00:17:36.734 slat (nsec): min=910, max=10779k, avg=73176.28, stdev=565714.00 00:17:36.734 clat (usec): min=1061, max=34677, avg=10117.11, stdev=3210.43 00:17:36.734 lat (usec): min=4306, max=34684, avg=10190.29, stdev=3249.27 00:17:36.734 clat percentiles (usec): 00:17:36.734 | 1.00th=[ 4555], 5.00th=[ 6587], 10.00th=[ 7308], 20.00th=[ 7504], 00:17:36.734 | 30.00th=[ 8160], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[10290], 00:17:36.734 | 70.00th=[11207], 80.00th=[12518], 90.00th=[13960], 95.00th=[15139], 00:17:36.734 | 99.00th=[22152], 99.50th=[26346], 99.90th=[31327], 99.95th=[34866], 00:17:36.734 | 99.99th=[34866] 00:17:36.734 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:17:36.734 slat (nsec): min=1497, max=43578k, avg=80564.43, stdev=787788.62 00:17:36.734 clat (usec): min=786, max=113680, avg=11665.80, stdev=14603.00 00:17:36.734 lat (usec): min=816, max=117213, avg=11746.36, stdev=14705.59 00:17:36.734 clat percentiles (usec): 00:17:36.734 | 1.00th=[ 1614], 5.00th=[ 4359], 10.00th=[ 5145], 20.00th=[ 5997], 00:17:36.734 | 30.00th=[ 6849], 40.00th=[ 7373], 50.00th=[ 8029], 60.00th=[ 8717], 00:17:36.734 | 70.00th=[ 10290], 80.00th=[ 11731], 90.00th=[ 15401], 95.00th=[ 28705], 00:17:36.734 | 99.00th=[ 90702], 99.50th=[106431], 99.90th=[111674], 99.95th=[113771], 00:17:36.734 | 99.99th=[113771] 00:17:36.734 bw ( KiB/s): min=23160, max=25992, per=26.62%, avg=24576.00, stdev=2002.53, samples=2 00:17:36.734 iops : min= 5790, max= 6498, avg=6144.00, stdev=500.63, samples=2 00:17:36.734 lat (usec) : 1000=0.04% 00:17:36.734 lat (msec) : 2=0.64%, 4=1.37%, 10=60.98%, 20=32.15%, 50=2.98% 00:17:36.734 lat (msec) : 100=1.49%, 250=0.34% 00:17:36.734 cpu : usr=4.59%, sys=5.49%, ctx=463, majf=0, minf=1 00:17:36.734 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:36.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:36.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:36.734 issued rwts: total=5495,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:36.734 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:36.734 00:17:36.734 Run status group 0 (all jobs): 00:17:36.734 READ: bw=84.3MiB/s (88.4MB/s), 18.3MiB/s-23.1MiB/s (19.2MB/s-24.2MB/s), io=85.3MiB (89.4MB), run=1003-1012msec 00:17:36.734 WRITE: bw=90.1MiB/s (94.5MB/s), 19.8MiB/s-23.9MiB/s (20.8MB/s-25.1MB/s), io=91.2MiB (95.7MB), run=1003-1012msec 00:17:36.734 00:17:36.734 Disk stats (read/write): 00:17:36.734 nvme0n1: ios=3760/4096, merge=0/0, ticks=47529/55313, in_queue=102842, util=90.88% 00:17:36.734 nvme0n2: ios=5668/5731, merge=0/0, ticks=45644/42666, in_queue=88310, util=87.77% 00:17:36.734 nvme0n3: ios=5138/5464, merge=0/0, ticks=48851/43431, in_queue=92282, util=95.78% 00:17:36.734 nvme0n4: ios=4248/5120, merge=0/0, ticks=34077/43655, in_queue=77732, 
util=96.58% 00:17:36.734 11:29:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:17:36.734 11:29:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3532799 00:17:36.734 11:29:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:17:36.734 11:29:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:36.734 [global] 00:17:36.734 thread=1 00:17:36.734 invalidate=1 00:17:36.734 rw=read 00:17:36.734 time_based=1 00:17:36.734 runtime=10 00:17:36.734 ioengine=libaio 00:17:36.734 direct=1 00:17:36.734 bs=4096 00:17:36.734 iodepth=1 00:17:36.734 norandommap=1 00:17:36.734 numjobs=1 00:17:36.734 00:17:36.734 [job0] 00:17:36.734 filename=/dev/nvme0n1 00:17:36.734 [job1] 00:17:36.734 filename=/dev/nvme0n2 00:17:36.734 [job2] 00:17:36.734 filename=/dev/nvme0n3 00:17:36.734 [job3] 00:17:36.734 filename=/dev/nvme0n4 00:17:36.734 Could not set queue depth (nvme0n1) 00:17:36.734 Could not set queue depth (nvme0n2) 00:17:36.734 Could not set queue depth (nvme0n3) 00:17:36.734 Could not set queue depth (nvme0n4) 00:17:36.999 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:36.999 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:36.999 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:36.999 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:36.999 fio-3.35 00:17:36.999 Starting 4 threads 00:17:39.523 11:29:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:39.780 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=8617984, buflen=4096 00:17:39.780 fio: pid=3533116, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:39.780 11:29:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:39.780 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=8581120, buflen=4096 00:17:39.780 fio: pid=3533109, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:39.780 11:29:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:39.780 11:29:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:40.038 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=8916992, buflen=4096 00:17:40.038 fio: pid=3533068, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:40.038 11:29:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:40.038 11:29:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:40.295 11:29:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:40.295 11:29:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:40.295 fio: io_u error on file 
/dev/nvme0n2: Remote I/O error: read offset=307200, buflen=4096 00:17:40.295 fio: pid=3533087, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:40.295 00:17:40.295 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3533068: Mon Jul 15 11:29:08 2024 00:17:40.295 read: IOPS=749, BW=2996KiB/s (3067kB/s)(8708KiB/2907msec) 00:17:40.295 slat (usec): min=8, max=7737, avg=32.31, stdev=219.59 00:17:40.295 clat (usec): min=961, max=6928, avg=1286.57, stdev=134.52 00:17:40.295 lat (usec): min=986, max=8959, avg=1318.89, stdev=256.17 00:17:40.295 clat percentiles (usec): 00:17:40.295 | 1.00th=[ 1139], 5.00th=[ 1188], 10.00th=[ 1205], 20.00th=[ 1237], 00:17:40.295 | 30.00th=[ 1254], 40.00th=[ 1270], 50.00th=[ 1287], 60.00th=[ 1303], 00:17:40.295 | 70.00th=[ 1319], 80.00th=[ 1336], 90.00th=[ 1352], 95.00th=[ 1369], 00:17:40.295 | 99.00th=[ 1418], 99.50th=[ 1434], 99.90th=[ 1500], 99.95th=[ 1516], 00:17:40.295 | 99.99th=[ 6915] 00:17:40.295 bw ( KiB/s): min= 3016, max= 3040, per=36.33%, avg=3032.00, stdev= 9.80, samples=5 00:17:40.295 iops : min= 754, max= 760, avg=758.00, stdev= 2.45, samples=5 00:17:40.295 lat (usec) : 1000=0.09% 00:17:40.295 lat (msec) : 2=99.82%, 10=0.05% 00:17:40.295 cpu : usr=1.10%, sys=3.17%, ctx=2181, majf=0, minf=1 00:17:40.295 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:40.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.295 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.295 issued rwts: total=2178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.295 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:40.295 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3533087: Mon Jul 15 11:29:08 2024 00:17:40.295 read: IOPS=24, BW=97.0KiB/s (99.4kB/s)(300KiB/3092msec) 00:17:40.295 slat (usec): min=7, max=1620, avg=47.15, stdev=182.92 00:17:40.295 clat (usec): min=645, max=43119, avg=40884.14, stdev=6658.76 00:17:40.295 lat (usec): min=703, max=43151, avg=40931.57, stdev=6659.32 00:17:40.295 clat percentiles (usec): 00:17:40.295 | 1.00th=[ 644], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:17:40.295 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:40.295 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:40.295 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:17:40.295 | 99.99th=[43254] 00:17:40.295 bw ( KiB/s): min= 95, max= 104, per=1.16%, avg=97.17, stdev= 3.37, samples=6 00:17:40.295 iops : min= 23, max= 26, avg=24.17, stdev= 0.98, samples=6 00:17:40.295 lat (usec) : 750=1.32% 00:17:40.295 lat (msec) : 2=1.32%, 50=96.05% 00:17:40.295 cpu : usr=0.00%, sys=0.13%, ctx=78, majf=0, minf=1 00:17:40.295 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:40.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.295 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.295 issued rwts: total=76,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.295 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:40.295 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3533109: Mon Jul 15 11:29:08 2024 00:17:40.295 read: IOPS=764, BW=3058KiB/s (3132kB/s)(8380KiB/2740msec) 00:17:40.295 slat (usec): min=6, max=21169, avg=38.48, stdev=485.53 00:17:40.295 
clat (usec): min=653, max=12818, avg=1252.39, stdev=350.39 00:17:40.295 lat (usec): min=661, max=22193, avg=1290.87, stdev=594.23 00:17:40.295 clat percentiles (usec): 00:17:40.295 | 1.00th=[ 848], 5.00th=[ 1074], 10.00th=[ 1139], 20.00th=[ 1188], 00:17:40.295 | 30.00th=[ 1221], 40.00th=[ 1237], 50.00th=[ 1254], 60.00th=[ 1270], 00:17:40.295 | 70.00th=[ 1287], 80.00th=[ 1303], 90.00th=[ 1336], 95.00th=[ 1352], 00:17:40.295 | 99.00th=[ 1401], 99.50th=[ 1418], 99.90th=[ 1483], 99.95th=[11469], 00:17:40.295 | 99.99th=[12780] 00:17:40.295 bw ( KiB/s): min= 3096, max= 3120, per=37.23%, avg=3107.20, stdev=12.13, samples=5 00:17:40.295 iops : min= 774, max= 780, avg=776.80, stdev= 3.03, samples=5 00:17:40.295 lat (usec) : 750=0.24%, 1000=2.81% 00:17:40.295 lat (msec) : 2=96.80%, 20=0.10% 00:17:40.295 cpu : usr=0.91%, sys=2.23%, ctx=2098, majf=0, minf=1 00:17:40.296 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:40.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.296 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.296 issued rwts: total=2096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.296 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:40.296 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3533116: Mon Jul 15 11:29:08 2024 00:17:40.296 read: IOPS=824, BW=3295KiB/s (3374kB/s)(8416KiB/2554msec) 00:17:40.296 slat (nsec): min=7447, max=58627, avg=26649.90, stdev=3035.54 00:17:40.296 clat (usec): min=760, max=1428, avg=1175.96, stdev=85.46 00:17:40.296 lat (usec): min=796, max=1454, avg=1202.61, stdev=85.43 00:17:40.296 clat percentiles (usec): 00:17:40.296 | 1.00th=[ 930], 5.00th=[ 1020], 10.00th=[ 1057], 20.00th=[ 1106], 00:17:40.296 | 30.00th=[ 1139], 40.00th=[ 1172], 50.00th=[ 1188], 60.00th=[ 1205], 00:17:40.296 | 70.00th=[ 1221], 80.00th=[ 1254], 90.00th=[ 1270], 95.00th=[ 1303], 00:17:40.296 | 99.00th=[ 1352], 99.50th=[ 1369], 99.90th=[ 1401], 99.95th=[ 1418], 00:17:40.296 | 99.99th=[ 1434] 00:17:40.296 bw ( KiB/s): min= 3272, max= 3328, per=39.66%, avg=3310.40, stdev=22.20, samples=5 00:17:40.296 iops : min= 818, max= 832, avg=827.60, stdev= 5.55, samples=5 00:17:40.296 lat (usec) : 1000=3.33% 00:17:40.296 lat (msec) : 2=96.63% 00:17:40.296 cpu : usr=1.41%, sys=3.37%, ctx=2105, majf=0, minf=2 00:17:40.296 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:40.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.296 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.296 issued rwts: total=2105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.296 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:40.296 00:17:40.296 Run status group 0 (all jobs): 00:17:40.296 READ: bw=8345KiB/s (8546kB/s), 97.0KiB/s-3295KiB/s (99.4kB/s-3374kB/s), io=25.2MiB (26.4MB), run=2554-3092msec 00:17:40.296 00:17:40.296 Disk stats (read/write): 00:17:40.296 nvme0n1: ios=2145/0, merge=0/0, ticks=2492/0, in_queue=2492, util=94.36% 00:17:40.296 nvme0n2: ios=76/0, merge=0/0, ticks=3078/0, in_queue=3078, util=95.39% 00:17:40.296 nvme0n3: ios=1997/0, merge=0/0, ticks=2440/0, in_queue=2440, util=96.03% 00:17:40.296 nvme0n4: ios=1932/0, merge=0/0, ticks=1993/0, in_queue=1993, util=96.02% 00:17:40.296 11:29:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:40.296 11:29:08 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:40.553 11:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:40.553 11:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:40.553 11:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:40.553 11:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:40.811 11:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:40.811 11:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:41.068 11:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:17:41.068 11:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3532799 00:17:41.068 11:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:17:41.068 11:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:41.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:41.068 11:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:41.068 11:29:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:17:41.068 11:29:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:41.068 11:29:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:41.068 11:29:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:41.068 11:29:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:41.068 11:29:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:17:41.068 11:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:41.068 11:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:41.068 nvmf hotplug test: fio failed as expected 00:17:41.068 11:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:41.326 11:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:41.326 11:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:41.326 11:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:41.326 11:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:41.326 11:29:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:17:41.326 11:29:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:41.326 11:29:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:17:41.326 11:29:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:41.326 11:29:09 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@120 -- # set +e 00:17:41.326 11:29:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:41.326 11:29:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:41.326 rmmod nvme_tcp 00:17:41.326 rmmod nvme_fabrics 00:17:41.326 rmmod nvme_keyring 00:17:41.326 11:29:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:41.326 11:29:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:17:41.326 11:29:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:17:41.326 11:29:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3529298 ']' 00:17:41.326 11:29:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3529298 00:17:41.326 11:29:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 3529298 ']' 00:17:41.326 11:29:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 3529298 00:17:41.326 11:29:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:17:41.326 11:29:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:41.326 11:29:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3529298 00:17:41.326 11:29:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:41.326 11:29:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:41.326 11:29:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3529298' 00:17:41.326 killing process with pid 3529298 00:17:41.326 11:29:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 3529298 00:17:41.326 11:29:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 3529298 00:17:41.585 11:29:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:41.585 11:29:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:41.585 11:29:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:41.585 11:29:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:41.585 11:29:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:41.585 11:29:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.585 11:29:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:41.585 11:29:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.126 11:29:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:44.126 00:17:44.126 real 0m28.285s 00:17:44.126 user 2m43.212s 00:17:44.126 sys 0m9.078s 00:17:44.126 11:29:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:44.126 11:29:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.126 ************************************ 00:17:44.126 END TEST nvmf_fio_target 00:17:44.126 ************************************ 00:17:44.126 11:29:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:44.126 11:29:12 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:44.126 11:29:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
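The nvmf_fio_target run ends above with its cleanup phase: the malloc bdevs backing the fio namespaces are deleted over RPC, the initiator disconnects, the subsystem is removed, and the kernel NVMe/TCP modules and the nvmf_tgt process are torn down. The fio error visible earlier (err=121, Remote I/O error) lines up with the "nvmf hotplug test: fio failed as expected" message, since the hotplug test removes backing bdevs while fio is still running. A condensed sketch of that teardown, assuming an SPDK checkout at $SPDK_DIR and the target pid in $nvmfpid (both placeholders; the bdev names and NQN are taken from the trace):

  # delete the malloc bdevs that backed the fio namespaces
  for bdev in Malloc3 Malloc4 Malloc5 Malloc6; do
      "$SPDK_DIR/scripts/rpc.py" bdev_malloc_delete "$bdev"
  done

  # drop the initiator-side connection, then remove the subsystem on the target
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  "$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

  # unload the kernel initiator modules and stop the target application
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"    # pid recorded when nvmf_tgt was started (placeholder here)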
00:17:44.126 11:29:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:44.126 11:29:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:44.126 ************************************ 00:17:44.126 START TEST nvmf_bdevio 00:17:44.126 ************************************ 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:44.126 * Looking for test storage... 00:17:44.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:44.126 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:44.127 11:29:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:44.127 11:29:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:44.127 11:29:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:44.127 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:44.127 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:44.127 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:44.127 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:44.127 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:44.127 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.127 11:29:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 
-- # eval '_remove_spdk_ns 14> /dev/null' 00:17:44.127 11:29:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.127 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:44.127 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:44.127 11:29:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:17:44.127 11:29:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:50.701 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:50.701 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:50.701 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:50.702 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:50.702 
Found net devices under 0000:4b:00.1: cvl_0_1 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:50.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:50.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms 00:17:50.702 00:17:50.702 --- 10.0.0.2 ping statistics --- 00:17:50.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.702 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms 00:17:50.702 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:50.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:50.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.375 ms 00:17:50.702 00:17:50.702 --- 10.0.0.1 ping statistics --- 00:17:50.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.702 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:17:50.961 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:50.961 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:50.961 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:50.961 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:50.961 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:50.961 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:50.961 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:50.961 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:50.961 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:50.961 11:29:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:50.961 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:50.961 11:29:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:50.961 11:29:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:50.961 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3538069 00:17:50.961 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3538069 00:17:50.961 11:29:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:50.961 11:29:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 3538069 ']' 00:17:50.961 11:29:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.961 11:29:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:50.961 11:29:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.961 11:29:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:50.961 11:29:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:50.961 [2024-07-15 11:29:19.507779] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:17:50.961 [2024-07-15 11:29:19.507846] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.961 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.961 [2024-07-15 11:29:19.598266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:51.240 [2024-07-15 11:29:19.693618] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.240 [2024-07-15 11:29:19.693684] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:51.240 [2024-07-15 11:29:19.693693] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.240 [2024-07-15 11:29:19.693700] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.240 [2024-07-15 11:29:19.693706] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:51.240 [2024-07-15 11:29:19.693871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:51.240 [2024-07-15 11:29:19.693903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:51.240 [2024-07-15 11:29:19.694044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:51.240 [2024-07-15 11:29:19.694044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:51.808 [2024-07-15 11:29:20.353240] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:51.808 Malloc0 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
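For nvmf_bdevio, the target side has just been brought up: nvmf_tgt runs inside the cvl_0_0_ns_spdk network namespace (so the 10.0.0.2 listener and the initiator-side interface can coexist on one host), the script waits for its RPC socket, and the RPC calls that follow in the trace create the TCP transport, attach a 64 MiB malloc bdev as a namespace of cnode1, and open a listener on 10.0.0.2:4420. A condensed sketch of that bring-up, assuming the same tree under $SPDK_DIR (a placeholder) and simplifying common.sh's waitforlisten to a plain socket check:

  # start the target inside the test namespace and wait for its RPC socket
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x78 &
  nvmfpid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done   # simplified stand-in for waitforlisten

  rpc="$SPDK_DIR/scripts/rpc.py"
  "$rpc" nvmf_create_transport -t tcp -o -u 8192        # same transport options bdevio.sh uses
  "$rpc" bdev_malloc_create 64 512 -b Malloc0           # 64 MiB bdev, 512-byte blocks
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then connects to that listener using the gen_nvmf_target_json configuration shown below and runs its write/read, compare, and passthru test suite against Nvme1n1.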
00:17:51.808 [2024-07-15 11:29:20.418989] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:51.808 { 00:17:51.808 "params": { 00:17:51.808 "name": "Nvme$subsystem", 00:17:51.808 "trtype": "$TEST_TRANSPORT", 00:17:51.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:51.808 "adrfam": "ipv4", 00:17:51.808 "trsvcid": "$NVMF_PORT", 00:17:51.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:51.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:51.808 "hdgst": ${hdgst:-false}, 00:17:51.808 "ddgst": ${ddgst:-false} 00:17:51.808 }, 00:17:51.808 "method": "bdev_nvme_attach_controller" 00:17:51.808 } 00:17:51.808 EOF 00:17:51.808 )") 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:51.808 11:29:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:51.808 "params": { 00:17:51.808 "name": "Nvme1", 00:17:51.808 "trtype": "tcp", 00:17:51.808 "traddr": "10.0.0.2", 00:17:51.808 "adrfam": "ipv4", 00:17:51.808 "trsvcid": "4420", 00:17:51.808 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:51.808 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:51.808 "hdgst": false, 00:17:51.808 "ddgst": false 00:17:51.808 }, 00:17:51.808 "method": "bdev_nvme_attach_controller" 00:17:51.808 }' 00:17:51.808 [2024-07-15 11:29:20.475389] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:17:51.808 [2024-07-15 11:29:20.475457] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3538365 ] 00:17:51.808 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.066 [2024-07-15 11:29:20.541708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:52.066 [2024-07-15 11:29:20.617906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.066 [2024-07-15 11:29:20.618024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.066 [2024-07-15 11:29:20.618027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.324 I/O targets: 00:17:52.324 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:52.324 00:17:52.324 00:17:52.324 CUnit - A unit testing framework for C - Version 2.1-3 00:17:52.324 http://cunit.sourceforge.net/ 00:17:52.324 00:17:52.324 00:17:52.324 Suite: bdevio tests on: Nvme1n1 00:17:52.324 Test: blockdev write read block ...passed 00:17:52.324 Test: blockdev write zeroes read block ...passed 00:17:52.324 Test: blockdev write zeroes read no split ...passed 00:17:52.324 Test: blockdev write zeroes read split ...passed 00:17:52.324 Test: blockdev write zeroes read split partial ...passed 00:17:52.324 Test: blockdev reset ...[2024-07-15 11:29:21.020906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:52.324 [2024-07-15 11:29:21.020975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1924ce0 (9): Bad file descriptor 00:17:52.582 [2024-07-15 11:29:21.039182] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:52.582 passed 00:17:52.582 Test: blockdev write read 8 blocks ...passed 00:17:52.582 Test: blockdev write read size > 128k ...passed 00:17:52.582 Test: blockdev write read invalid size ...passed 00:17:52.582 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:52.582 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:52.582 Test: blockdev write read max offset ...passed 00:17:52.582 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:52.582 Test: blockdev writev readv 8 blocks ...passed 00:17:52.582 Test: blockdev writev readv 30 x 1block ...passed 00:17:52.841 Test: blockdev writev readv block ...passed 00:17:52.841 Test: blockdev writev readv size > 128k ...passed 00:17:52.841 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:52.841 Test: blockdev comparev and writev ...[2024-07-15 11:29:21.308987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:52.841 [2024-07-15 11:29:21.309012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:52.841 [2024-07-15 11:29:21.309023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:52.841 [2024-07-15 11:29:21.309029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:52.841 [2024-07-15 11:29:21.309606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:52.841 [2024-07-15 11:29:21.309620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:52.841 [2024-07-15 11:29:21.309629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:52.841 [2024-07-15 11:29:21.309635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:52.841 [2024-07-15 11:29:21.310191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:52.841 [2024-07-15 11:29:21.310200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:52.841 [2024-07-15 11:29:21.310209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:52.841 [2024-07-15 11:29:21.310214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:52.841 [2024-07-15 11:29:21.310773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:52.841 [2024-07-15 11:29:21.310782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:52.841 [2024-07-15 11:29:21.310792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:52.841 [2024-07-15 11:29:21.310797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:52.841 passed 00:17:52.841 Test: blockdev nvme passthru rw ...passed 00:17:52.841 Test: blockdev nvme passthru vendor specific ...[2024-07-15 11:29:21.395025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:52.841 [2024-07-15 11:29:21.395036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:52.841 [2024-07-15 11:29:21.395533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:52.841 [2024-07-15 11:29:21.395542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:52.841 [2024-07-15 11:29:21.396058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:52.841 [2024-07-15 11:29:21.396067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:52.841 [2024-07-15 11:29:21.396594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:52.841 [2024-07-15 11:29:21.396602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:52.841 passed 00:17:52.841 Test: blockdev nvme admin passthru ...passed 00:17:52.841 Test: blockdev copy ...passed 00:17:52.841 00:17:52.841 Run Summary: Type Total Ran Passed Failed Inactive 00:17:52.841 suites 1 1 n/a 0 0 00:17:52.841 tests 23 23 23 0 0 00:17:52.841 asserts 152 152 152 0 n/a 00:17:52.841 00:17:52.841 Elapsed time = 1.234 seconds 00:17:53.099 11:29:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:53.099 11:29:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.099 11:29:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:53.099 11:29:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.099 11:29:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:53.100 11:29:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:53.100 11:29:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:53.100 11:29:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:53.100 11:29:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:53.100 11:29:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:53.100 11:29:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:53.100 11:29:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:53.100 rmmod nvme_tcp 00:17:53.100 rmmod nvme_fabrics 00:17:53.100 rmmod nvme_keyring 00:17:53.100 11:29:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:53.100 11:29:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:53.100 11:29:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:53.100 11:29:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3538069 ']' 00:17:53.100 11:29:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3538069 00:17:53.100 11:29:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
3538069 ']' 00:17:53.100 11:29:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 3538069 00:17:53.100 11:29:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:17:53.100 11:29:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:53.100 11:29:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3538069 00:17:53.100 11:29:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:53.100 11:29:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:53.100 11:29:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3538069' 00:17:53.100 killing process with pid 3538069 00:17:53.100 11:29:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 3538069 00:17:53.100 11:29:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 3538069 00:17:53.359 11:29:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:53.359 11:29:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:53.359 11:29:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:53.359 11:29:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:53.359 11:29:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:53.359 11:29:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.359 11:29:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.359 11:29:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.301 11:29:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:55.301 00:17:55.301 real 0m11.627s 00:17:55.301 user 0m12.827s 00:17:55.301 sys 0m5.784s 00:17:55.301 11:29:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:55.301 11:29:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:55.301 ************************************ 00:17:55.301 END TEST nvmf_bdevio 00:17:55.301 ************************************ 00:17:55.301 11:29:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:55.301 11:29:23 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:55.301 11:29:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:55.301 11:29:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:55.301 11:29:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:55.301 ************************************ 00:17:55.301 START TEST nvmf_auth_target 00:17:55.301 ************************************ 00:17:55.301 11:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:55.561 * Looking for test storage... 
00:17:55.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:55.561 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:55.561 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:55.561 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.561 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.561 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.561 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.561 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.561 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.561 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.561 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.561 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.561 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.561 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:55.562 11:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:02.156 11:29:30 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:02.156 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:02.156 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: 
cvl_0_0' 00:18:02.156 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:02.156 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:02.156 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:02.417 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:02.417 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:02.417 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:02.417 11:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:02.417 11:29:31 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:02.417 11:29:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:02.678 11:29:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:02.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:02.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:18:02.678 00:18:02.678 --- 10.0.0.2 ping statistics --- 00:18:02.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.678 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:18:02.678 11:29:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:02.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:02.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.424 ms 00:18:02.678 00:18:02.678 --- 10.0.0.1 ping statistics --- 00:18:02.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.678 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:18:02.678 11:29:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:02.678 11:29:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:02.678 11:29:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:02.678 11:29:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:02.678 11:29:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:02.678 11:29:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:02.678 11:29:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:02.678 11:29:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:02.678 11:29:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:02.678 11:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:02.678 11:29:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:02.678 11:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:02.678 11:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.678 11:29:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3542697 00:18:02.678 11:29:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3542697 00:18:02.678 11:29:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:02.678 11:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3542697 ']' 00:18:02.678 11:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.678 11:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:02.678 11:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
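At this point nvmf_tcp_init has finished building the loopback test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, TCP port 4420 is opened, and reachability is checked in both directions before nvme-tcp is loaded and the target application is started inside the namespace. The following is a minimal standalone sketch of that same topology, using only the interface names, addresses and port that appear in the trace above; it is not the harness's own helper, just the equivalent sequence of commands.

    #!/usr/bin/env bash
    # Sketch: rebuild the netns loopback topology used by the trace above.
    # Assumes two ports of the same e810 NIC (cvl_0_0, cvl_0_1) that can reach
    # each other on the wire.
    set -e

    TGT_IF=cvl_0_0            # target side, moved into a namespace
    INI_IF=cvl_0_1            # initiator side, stays in the root namespace
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Allow NVMe/TCP traffic on the initiator-side interface (port 4420).
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

    # Sanity check: each side must see the other before the target starts.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

The target is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ..., as traced below), so 10.0.0.2:4420 is served from the namespace while the host-side tooling runs in the root namespace.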
00:18:02.678 11:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:02.678 11:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3542906 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c46e1b8b74b7042a130606f4e210252c4ab45268d368ae3a 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Z8W 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c46e1b8b74b7042a130606f4e210252c4ab45268d368ae3a 0 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c46e1b8b74b7042a130606f4e210252c4ab45268d368ae3a 0 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c46e1b8b74b7042a130606f4e210252c4ab45268d368ae3a 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Z8W 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Z8W 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.Z8W 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1463e3d500ef1a77cbb2ad4fec7fcf40fca15f3dbbf2c06d7a7b9aafb7fb1bf2 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Sm7 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1463e3d500ef1a77cbb2ad4fec7fcf40fca15f3dbbf2c06d7a7b9aafb7fb1bf2 3 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1463e3d500ef1a77cbb2ad4fec7fcf40fca15f3dbbf2c06d7a7b9aafb7fb1bf2 3 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1463e3d500ef1a77cbb2ad4fec7fcf40fca15f3dbbf2c06d7a7b9aafb7fb1bf2 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Sm7 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Sm7 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.Sm7 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=44ef79dbec691c5213e0ba6496e2e6a3 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:03.618 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.tMz 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 44ef79dbec691c5213e0ba6496e2e6a3 1 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 44ef79dbec691c5213e0ba6496e2e6a3 1 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=44ef79dbec691c5213e0ba6496e2e6a3 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.tMz 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.tMz 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.tMz 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ed760457c417256e690b6c28a4773327748c536444613ca8 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.t4n 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ed760457c417256e690b6c28a4773327748c536444613ca8 2 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ed760457c417256e690b6c28a4773327748c536444613ca8 2 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ed760457c417256e690b6c28a4773327748c536444613ca8 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.t4n 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.t4n 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.t4n 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:03.619 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:03.879 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:03.879 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=03aa80323d742ca74d8e093082851fc86bd4709789e9441f 00:18:03.879 
11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:03.879 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.eV6 00:18:03.879 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 03aa80323d742ca74d8e093082851fc86bd4709789e9441f 2 00:18:03.879 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 03aa80323d742ca74d8e093082851fc86bd4709789e9441f 2 00:18:03.879 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:03.879 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:03.879 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=03aa80323d742ca74d8e093082851fc86bd4709789e9441f 00:18:03.879 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:03.879 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:03.879 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.eV6 00:18:03.879 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.eV6 00:18:03.879 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.eV6 00:18:03.879 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:03.879 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:03.879 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:03.879 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:03.879 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:03.879 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d93c4c88e0b5bc72396e47dc7d5672d4 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.RI3 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d93c4c88e0b5bc72396e47dc7d5672d4 1 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d93c4c88e0b5bc72396e47dc7d5672d4 1 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d93c4c88e0b5bc72396e47dc7d5672d4 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.RI3 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.RI3 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.RI3 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c3d185bb55757df981d1147adb548dc34313a114888e8b48c1ecb576d26cfb10 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.StJ 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c3d185bb55757df981d1147adb548dc34313a114888e8b48c1ecb576d26cfb10 3 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c3d185bb55757df981d1147adb548dc34313a114888e8b48c1ecb576d26cfb10 3 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c3d185bb55757df981d1147adb548dc34313a114888e8b48c1ecb576d26cfb10 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.StJ 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.StJ 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.StJ 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3542697 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3542697 ']' 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
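Each gen_dhchap_key <digest> <len> call traced above builds a secret the same way: read len/2 random bytes from /dev/urandom as hex with xxd, create a /tmp/spdk.key-<digest>.XXX file with mktemp, wrap the hex string into the DHHC-1:<digest-id>:<base64>: form with the in-tree format_dhchap_key Python helper, and chmod it to 0600. Below is a rough standalone equivalent. The encoding of the base64 payload (ASCII hex key followed by its CRC-32) is an assumption based on the DHHC-1 secrets that appear later in the trace (e.g. DHHC-1:00:YzQ2...); the in-tree helper in nvmf/common.sh is authoritative.

    # Sketch: generate a DH-HMAC-CHAP secret roughly the way gen_dhchap_key does.
    # Digest ids follow the map traced above: 0=null, 1=sha256, 2=sha384, 3=sha512.
    digest_id=0
    keylen=48                                  # hex characters, i.e. 24 random bytes

    key=$(xxd -p -c0 -l $((keylen / 2)) /dev/urandom)
    file=$(mktemp -t spdk.key-null.XXX)

    # Assumed encoding: base64( ascii_hex_key || CRC-32(ascii_hex_key) ), matching
    # nvme-cli style DHHC-1 secrets; SPDK's format_dhchap_key does this in Python.
    python3 - "$digest_id" "$key" > "$file" <<'EOF'
    import base64, struct, sys, zlib
    digest, key = int(sys.argv[1]), sys.argv[2].encode()
    blob = key + struct.pack("<I", zlib.crc32(key))
    print(f"DHHC-1:{digest:02}:{base64.b64encode(blob).decode()}:")
    EOF

    chmod 0600 "$file"
    cat "$file"    # e.g. DHHC-1:00:....:

The test generates four such keys (keys[0..3]) plus controller keys (ckeys[0..2]); the file paths are what get registered with keyring_file_add_key on both applications in the steps that follow.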
00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:03.880 11:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.139 11:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.139 11:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:04.140 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3542906 /var/tmp/host.sock 00:18:04.140 11:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3542906 ']' 00:18:04.140 11:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:18:04.140 11:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:04.140 11:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:04.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:04.140 11:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:04.140 11:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.140 11:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.140 11:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:04.140 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:04.140 11:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.140 11:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.140 11:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.140 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:04.140 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Z8W 00:18:04.140 11:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.140 11:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.400 11:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.400 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Z8W 00:18:04.400 11:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Z8W 00:18:04.400 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.Sm7 ]] 00:18:04.400 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Sm7 00:18:04.400 11:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.400 11:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.400 11:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.400 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Sm7 00:18:04.400 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Sm7 00:18:04.660 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:04.660 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.tMz 00:18:04.660 11:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.660 11:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.660 11:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.660 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.tMz 00:18:04.660 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.tMz 00:18:04.660 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.t4n ]] 00:18:04.660 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.t4n 00:18:04.660 11:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.660 11:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.660 11:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.660 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.t4n 00:18:04.660 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.t4n 00:18:04.919 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:04.919 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.eV6 00:18:04.919 11:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.919 11:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.919 11:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.919 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.eV6 00:18:04.919 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.eV6 00:18:05.179 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.RI3 ]] 00:18:05.179 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RI3 00:18:05.179 11:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.179 11:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.179 11:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.179 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RI3 00:18:05.179 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.RI3 00:18:05.179 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:05.179 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.StJ 00:18:05.179 11:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.179 11:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.179 11:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.179 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.StJ 00:18:05.179 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.StJ 00:18:05.439 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:05.439 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:05.439 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:05.439 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.439 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:05.439 11:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:05.439 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:05.439 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.439 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:05.439 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:05.439 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:05.439 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.439 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.439 11:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.439 11:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.439 11:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.439 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.439 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.699 00:18:05.699 11:29:34 nvmf_tcp.nvmf_auth_target -- 
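Each connect_authenticate <digest> <dhgroup> <keyid> round traced here follows the same RPC sequence: restrict the host application to one digest/dhgroup combination with bdev_nvme_set_options, register the host NQN on the subsystem with the DH-HMAC-CHAP key pair, attach a controller from the host side with the matching keys (which drives the AUTH exchange), then read the qpair back and expect auth state "completed" with the chosen digest and dhgroup. A condensed sketch of one round is below, using the rpc.py invocations visible in the trace; the rpc.py path is shortened, and the subsystem/host NQNs are the ones from the log.

    RPC=scripts/rpc.py                    # shortened; the trace uses the full SPDK path
    HOST_SOCK=/var/tmp/host.sock          # host-side (initiator) SPDK application
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    # Both applications must already hold key0/ckey0 via keyring_file_add_key (above).

    # 1. Limit the host application to a single digest/dhgroup combination.
    $RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

    # 2. Allow the host NQN on the target subsystem with key0/ckey0 (target-side socket).
    $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # 3. Attach a controller from the host application; this performs the AUTH exchange.
    $RPC -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q $HOSTNQN -n $SUBNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # 4. Verify the negotiated auth parameters on the target side.
    $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth'   # expect state "completed"

    # 5. Tear down before the next digest/dhgroup/key combination.
    $RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0

The same keys are also exercised through the kernel initiator later in the trace with nvme connect ... --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:..., followed by nvme disconnect and nvmf_subsystem_remove_host before the next iteration.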
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.699 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.699 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.959 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.959 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.959 11:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.959 11:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.959 11:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.959 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.959 { 00:18:05.959 "cntlid": 1, 00:18:05.959 "qid": 0, 00:18:05.959 "state": "enabled", 00:18:05.959 "thread": "nvmf_tgt_poll_group_000", 00:18:05.959 "listen_address": { 00:18:05.959 "trtype": "TCP", 00:18:05.959 "adrfam": "IPv4", 00:18:05.959 "traddr": "10.0.0.2", 00:18:05.959 "trsvcid": "4420" 00:18:05.959 }, 00:18:05.959 "peer_address": { 00:18:05.959 "trtype": "TCP", 00:18:05.959 "adrfam": "IPv4", 00:18:05.959 "traddr": "10.0.0.1", 00:18:05.959 "trsvcid": "60882" 00:18:05.959 }, 00:18:05.959 "auth": { 00:18:05.959 "state": "completed", 00:18:05.959 "digest": "sha256", 00:18:05.959 "dhgroup": "null" 00:18:05.959 } 00:18:05.959 } 00:18:05.959 ]' 00:18:05.959 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.959 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:05.959 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.959 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:05.959 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.219 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.219 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.219 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.219 11:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzQ2ZTFiOGI3NGI3MDQyYTEzMDYwNmY0ZTIxMDI1MmM0YWI0NTI2OGQzNjhhZTNh4MSdMw==: --dhchap-ctrl-secret DHHC-1:03:MTQ2M2UzZDUwMGVmMWE3N2NiYjJhZDRmZWM3ZmNmNDBmY2ExNWYzZGJiZjJjMDZkN2E3YjlhYWZiN2ZiMWJmMqSK4cM=: 00:18:07.157 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.157 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:07.157 11:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.157 11:29:35 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.157 11:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.157 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.157 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:07.157 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:07.157 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:07.157 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.157 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:07.157 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:07.157 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:07.157 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.157 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.157 11:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.157 11:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.157 11:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.157 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.157 11:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.418 00:18:07.418 11:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.418 11:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.418 11:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.678 11:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.678 11:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.678 11:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.678 11:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.678 11:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.678 11:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.678 { 00:18:07.678 "cntlid": 3, 00:18:07.678 "qid": 0, 00:18:07.678 
"state": "enabled", 00:18:07.678 "thread": "nvmf_tgt_poll_group_000", 00:18:07.678 "listen_address": { 00:18:07.678 "trtype": "TCP", 00:18:07.678 "adrfam": "IPv4", 00:18:07.678 "traddr": "10.0.0.2", 00:18:07.678 "trsvcid": "4420" 00:18:07.678 }, 00:18:07.678 "peer_address": { 00:18:07.678 "trtype": "TCP", 00:18:07.678 "adrfam": "IPv4", 00:18:07.678 "traddr": "10.0.0.1", 00:18:07.678 "trsvcid": "60902" 00:18:07.678 }, 00:18:07.678 "auth": { 00:18:07.678 "state": "completed", 00:18:07.678 "digest": "sha256", 00:18:07.678 "dhgroup": "null" 00:18:07.678 } 00:18:07.678 } 00:18:07.678 ]' 00:18:07.678 11:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.678 11:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:07.678 11:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.678 11:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:07.678 11:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.678 11:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.678 11:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.678 11:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.938 11:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDRlZjc5ZGJlYzY5MWM1MjEzZTBiYTY0OTZlMmU2YTPpkw+v: --dhchap-ctrl-secret DHHC-1:02:ZWQ3NjA0NTdjNDE3MjU2ZTY5MGI2YzI4YTQ3NzMzMjc3NDhjNTM2NDQ0NjEzY2E4JTvMtw==: 00:18:08.507 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.507 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:08.507 11:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.507 11:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.768 11:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.768 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.768 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:08.768 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:08.768 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:08.768 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.768 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:08.768 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:08.768 11:29:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:08.768 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.768 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.768 11:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.768 11:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.768 11:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.768 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.768 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.028 00:18:09.028 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.028 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.028 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.288 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.288 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.288 11:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.288 11:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.288 11:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.288 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.288 { 00:18:09.288 "cntlid": 5, 00:18:09.288 "qid": 0, 00:18:09.288 "state": "enabled", 00:18:09.288 "thread": "nvmf_tgt_poll_group_000", 00:18:09.288 "listen_address": { 00:18:09.288 "trtype": "TCP", 00:18:09.288 "adrfam": "IPv4", 00:18:09.288 "traddr": "10.0.0.2", 00:18:09.288 "trsvcid": "4420" 00:18:09.288 }, 00:18:09.288 "peer_address": { 00:18:09.288 "trtype": "TCP", 00:18:09.288 "adrfam": "IPv4", 00:18:09.288 "traddr": "10.0.0.1", 00:18:09.288 "trsvcid": "46460" 00:18:09.288 }, 00:18:09.288 "auth": { 00:18:09.288 "state": "completed", 00:18:09.288 "digest": "sha256", 00:18:09.288 "dhgroup": "null" 00:18:09.288 } 00:18:09.288 } 00:18:09.288 ]' 00:18:09.288 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.288 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:09.288 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.288 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:09.288 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:18:09.288 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.288 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.288 11:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.548 11:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDNhYTgwMzIzZDc0MmNhNzRkOGUwOTMwODI4NTFmYzg2YmQ0NzA5Nzg5ZTk0NDFm9lQsYQ==: --dhchap-ctrl-secret DHHC-1:01:ZDkzYzRjODhlMGI1YmM3MjM5NmU0N2RjN2Q1NjcyZDT0j6Fg: 00:18:10.487 11:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.488 11:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:10.488 11:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.488 11:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.488 11:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.488 11:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.488 11:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:10.488 11:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:10.488 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:10.488 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.488 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:10.488 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:10.488 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:10.488 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.488 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:10.488 11:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.488 11:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.488 11:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.488 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:10.488 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:10.748 00:18:10.748 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.748 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.748 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.748 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.748 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.748 11:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.748 11:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.748 11:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.748 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.748 { 00:18:10.748 "cntlid": 7, 00:18:10.748 "qid": 0, 00:18:10.748 "state": "enabled", 00:18:10.748 "thread": "nvmf_tgt_poll_group_000", 00:18:10.748 "listen_address": { 00:18:10.748 "trtype": "TCP", 00:18:10.748 "adrfam": "IPv4", 00:18:10.748 "traddr": "10.0.0.2", 00:18:10.748 "trsvcid": "4420" 00:18:10.748 }, 00:18:10.748 "peer_address": { 00:18:10.748 "trtype": "TCP", 00:18:10.748 "adrfam": "IPv4", 00:18:10.748 "traddr": "10.0.0.1", 00:18:10.748 "trsvcid": "46490" 00:18:10.748 }, 00:18:10.748 "auth": { 00:18:10.748 "state": "completed", 00:18:10.748 "digest": "sha256", 00:18:10.748 "dhgroup": "null" 00:18:10.748 } 00:18:10.748 } 00:18:10.748 ]' 00:18:10.748 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.007 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:11.007 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.007 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:11.007 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.007 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.007 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.007 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.267 11:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzNkMTg1YmI1NTc1N2RmOTgxZDExNDdhZGI1NDhkYzM0MzEzYTExNDg4OGU4YjQ4YzFlY2I1NzZkMjZjZmIxMJ7vT24=: 00:18:11.836 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.836 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.836 11:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.836 11:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.836 11:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.836 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:11.836 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.836 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:11.836 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:12.096 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:12.096 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.096 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:12.096 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:12.096 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:12.096 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.096 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.096 11:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.096 11:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.096 11:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.096 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.096 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.355 00:18:12.355 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.355 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.355 11:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.355 11:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.355 11:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.355 11:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:18:12.355 11:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.355 11:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.355 11:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.355 { 00:18:12.355 "cntlid": 9, 00:18:12.355 "qid": 0, 00:18:12.355 "state": "enabled", 00:18:12.355 "thread": "nvmf_tgt_poll_group_000", 00:18:12.355 "listen_address": { 00:18:12.355 "trtype": "TCP", 00:18:12.355 "adrfam": "IPv4", 00:18:12.355 "traddr": "10.0.0.2", 00:18:12.355 "trsvcid": "4420" 00:18:12.355 }, 00:18:12.355 "peer_address": { 00:18:12.355 "trtype": "TCP", 00:18:12.355 "adrfam": "IPv4", 00:18:12.355 "traddr": "10.0.0.1", 00:18:12.355 "trsvcid": "46518" 00:18:12.355 }, 00:18:12.355 "auth": { 00:18:12.355 "state": "completed", 00:18:12.355 "digest": "sha256", 00:18:12.355 "dhgroup": "ffdhe2048" 00:18:12.355 } 00:18:12.355 } 00:18:12.355 ]' 00:18:12.355 11:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.356 11:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:12.356 11:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.614 11:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:12.614 11:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.614 11:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.614 11:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.614 11:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.614 11:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzQ2ZTFiOGI3NGI3MDQyYTEzMDYwNmY0ZTIxMDI1MmM0YWI0NTI2OGQzNjhhZTNh4MSdMw==: --dhchap-ctrl-secret DHHC-1:03:MTQ2M2UzZDUwMGVmMWE3N2NiYjJhZDRmZWM3ZmNmNDBmY2ExNWYzZGJiZjJjMDZkN2E3YjlhYWZiN2ZiMWJmMqSK4cM=: 00:18:13.548 11:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.548 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.548 11:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.548 11:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.548 11:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.548 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.548 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:13.548 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:18:13.548 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:13.548 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.548 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:13.548 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:13.548 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:13.548 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.548 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.548 11:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.548 11:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.548 11:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.548 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.549 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.807 00:18:13.807 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.807 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.807 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.065 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.065 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.065 11:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.065 11:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.065 11:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.065 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.065 { 00:18:14.065 "cntlid": 11, 00:18:14.065 "qid": 0, 00:18:14.065 "state": "enabled", 00:18:14.065 "thread": "nvmf_tgt_poll_group_000", 00:18:14.065 "listen_address": { 00:18:14.065 "trtype": "TCP", 00:18:14.065 "adrfam": "IPv4", 00:18:14.065 "traddr": "10.0.0.2", 00:18:14.065 "trsvcid": "4420" 00:18:14.065 }, 00:18:14.065 "peer_address": { 00:18:14.065 "trtype": "TCP", 00:18:14.065 "adrfam": "IPv4", 00:18:14.065 "traddr": "10.0.0.1", 00:18:14.065 "trsvcid": "46542" 00:18:14.065 }, 00:18:14.065 "auth": { 00:18:14.065 "state": "completed", 00:18:14.065 "digest": "sha256", 00:18:14.065 "dhgroup": "ffdhe2048" 00:18:14.065 } 00:18:14.065 } 00:18:14.065 ]' 00:18:14.065 
11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.065 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:14.065 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.065 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:14.065 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.065 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.065 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.065 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.324 11:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDRlZjc5ZGJlYzY5MWM1MjEzZTBiYTY0OTZlMmU2YTPpkw+v: --dhchap-ctrl-secret DHHC-1:02:ZWQ3NjA0NTdjNDE3MjU2ZTY5MGI2YzI4YTQ3NzMzMjc3NDhjNTM2NDQ0NjEzY2E4JTvMtw==: 00:18:15.261 11:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.261 11:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:15.261 11:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.261 11:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.261 11:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.261 11:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.261 11:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:15.261 11:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:15.261 11:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:15.261 11:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.261 11:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:15.261 11:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:15.261 11:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:15.261 11:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.261 11:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.261 11:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.261 11:29:43 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:15.261 11:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.261 11:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.261 11:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.520 00:18:15.520 11:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.520 11:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.520 11:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.778 11:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.778 11:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.778 11:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.778 11:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.778 11:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.778 11:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.778 { 00:18:15.778 "cntlid": 13, 00:18:15.778 "qid": 0, 00:18:15.778 "state": "enabled", 00:18:15.778 "thread": "nvmf_tgt_poll_group_000", 00:18:15.778 "listen_address": { 00:18:15.778 "trtype": "TCP", 00:18:15.778 "adrfam": "IPv4", 00:18:15.778 "traddr": "10.0.0.2", 00:18:15.778 "trsvcid": "4420" 00:18:15.778 }, 00:18:15.778 "peer_address": { 00:18:15.778 "trtype": "TCP", 00:18:15.778 "adrfam": "IPv4", 00:18:15.778 "traddr": "10.0.0.1", 00:18:15.778 "trsvcid": "46582" 00:18:15.778 }, 00:18:15.778 "auth": { 00:18:15.778 "state": "completed", 00:18:15.778 "digest": "sha256", 00:18:15.778 "dhgroup": "ffdhe2048" 00:18:15.778 } 00:18:15.778 } 00:18:15.778 ]' 00:18:15.778 11:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.779 11:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:15.779 11:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.779 11:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:15.779 11:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.779 11:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.779 11:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.779 11:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.048 11:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDNhYTgwMzIzZDc0MmNhNzRkOGUwOTMwODI4NTFmYzg2YmQ0NzA5Nzg5ZTk0NDFm9lQsYQ==: --dhchap-ctrl-secret DHHC-1:01:ZDkzYzRjODhlMGI1YmM3MjM5NmU0N2RjN2Q1NjcyZDT0j6Fg: 00:18:16.617 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.617 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:16.617 11:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.617 11:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.617 11:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.617 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.617 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:16.617 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:16.876 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:16.876 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.876 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:16.876 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:16.876 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:16.876 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.876 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:16.876 11:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.876 11:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.876 11:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.876 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:16.876 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:17.136 00:18:17.136 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.136 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.136 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.397 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.397 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.397 11:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.397 11:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.397 11:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.397 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.397 { 00:18:17.397 "cntlid": 15, 00:18:17.397 "qid": 0, 00:18:17.397 "state": "enabled", 00:18:17.397 "thread": "nvmf_tgt_poll_group_000", 00:18:17.397 "listen_address": { 00:18:17.397 "trtype": "TCP", 00:18:17.397 "adrfam": "IPv4", 00:18:17.398 "traddr": "10.0.0.2", 00:18:17.398 "trsvcid": "4420" 00:18:17.398 }, 00:18:17.398 "peer_address": { 00:18:17.398 "trtype": "TCP", 00:18:17.398 "adrfam": "IPv4", 00:18:17.398 "traddr": "10.0.0.1", 00:18:17.398 "trsvcid": "46610" 00:18:17.398 }, 00:18:17.398 "auth": { 00:18:17.398 "state": "completed", 00:18:17.398 "digest": "sha256", 00:18:17.398 "dhgroup": "ffdhe2048" 00:18:17.398 } 00:18:17.398 } 00:18:17.398 ]' 00:18:17.398 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.398 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:17.398 11:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.398 11:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:17.398 11:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.398 11:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.398 11:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.398 11:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.657 11:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzNkMTg1YmI1NTc1N2RmOTgxZDExNDdhZGI1NDhkYzM0MzEzYTExNDg4OGU4YjQ4YzFlY2I1NzZkMjZjZmIxMJ7vT24=: 00:18:18.595 11:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.595 11:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.595 11:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.595 11:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.595 11:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.595 11:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:18.595 11:29:46 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.595 11:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:18.595 11:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:18.595 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:18.595 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.595 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:18.595 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:18.595 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:18.595 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.595 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.595 11:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.595 11:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.595 11:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.595 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.595 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.854 00:18:18.854 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.854 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.854 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.854 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.854 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.854 11:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.854 11:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.112 11:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.112 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.112 { 00:18:19.112 "cntlid": 17, 00:18:19.112 "qid": 0, 00:18:19.112 "state": "enabled", 00:18:19.112 "thread": "nvmf_tgt_poll_group_000", 00:18:19.112 "listen_address": { 00:18:19.112 "trtype": "TCP", 00:18:19.112 "adrfam": "IPv4", 00:18:19.112 "traddr": 
"10.0.0.2", 00:18:19.112 "trsvcid": "4420" 00:18:19.112 }, 00:18:19.112 "peer_address": { 00:18:19.112 "trtype": "TCP", 00:18:19.112 "adrfam": "IPv4", 00:18:19.112 "traddr": "10.0.0.1", 00:18:19.112 "trsvcid": "46642" 00:18:19.112 }, 00:18:19.112 "auth": { 00:18:19.112 "state": "completed", 00:18:19.112 "digest": "sha256", 00:18:19.112 "dhgroup": "ffdhe3072" 00:18:19.112 } 00:18:19.112 } 00:18:19.112 ]' 00:18:19.112 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.112 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:19.112 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.112 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:19.112 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.112 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.112 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.112 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.371 11:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzQ2ZTFiOGI3NGI3MDQyYTEzMDYwNmY0ZTIxMDI1MmM0YWI0NTI2OGQzNjhhZTNh4MSdMw==: --dhchap-ctrl-secret DHHC-1:03:MTQ2M2UzZDUwMGVmMWE3N2NiYjJhZDRmZWM3ZmNmNDBmY2ExNWYzZGJiZjJjMDZkN2E3YjlhYWZiN2ZiMWJmMqSK4cM=: 00:18:19.937 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.937 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:19.937 11:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.937 11:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.937 11:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.937 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.937 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:19.937 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:20.196 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:20.196 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.196 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:20.196 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:20.196 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:20.196 11:29:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.196 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.196 11:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.196 11:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.196 11:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.196 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.196 11:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.454 00:18:20.454 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.454 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.454 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.713 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.713 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.713 11:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.713 11:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.713 11:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.713 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.713 { 00:18:20.713 "cntlid": 19, 00:18:20.713 "qid": 0, 00:18:20.713 "state": "enabled", 00:18:20.713 "thread": "nvmf_tgt_poll_group_000", 00:18:20.713 "listen_address": { 00:18:20.713 "trtype": "TCP", 00:18:20.713 "adrfam": "IPv4", 00:18:20.713 "traddr": "10.0.0.2", 00:18:20.713 "trsvcid": "4420" 00:18:20.713 }, 00:18:20.713 "peer_address": { 00:18:20.713 "trtype": "TCP", 00:18:20.713 "adrfam": "IPv4", 00:18:20.713 "traddr": "10.0.0.1", 00:18:20.713 "trsvcid": "51748" 00:18:20.713 }, 00:18:20.713 "auth": { 00:18:20.713 "state": "completed", 00:18:20.713 "digest": "sha256", 00:18:20.713 "dhgroup": "ffdhe3072" 00:18:20.713 } 00:18:20.713 } 00:18:20.713 ]' 00:18:20.713 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.713 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:20.713 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.713 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:20.713 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.713 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.713 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.713 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.972 11:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDRlZjc5ZGJlYzY5MWM1MjEzZTBiYTY0OTZlMmU2YTPpkw+v: --dhchap-ctrl-secret DHHC-1:02:ZWQ3NjA0NTdjNDE3MjU2ZTY5MGI2YzI4YTQ3NzMzMjc3NDhjNTM2NDQ0NjEzY2E4JTvMtw==: 00:18:21.910 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.910 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:21.910 11:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.910 11:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.910 11:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.910 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.910 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:21.910 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:21.910 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:21.910 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.910 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:21.910 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:21.910 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:21.910 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.910 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.910 11:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.910 11:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.910 11:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.910 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.910 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.169 00:18:22.169 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.169 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.169 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.429 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.429 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.429 11:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.429 11:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.429 11:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.429 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.429 { 00:18:22.429 "cntlid": 21, 00:18:22.429 "qid": 0, 00:18:22.429 "state": "enabled", 00:18:22.429 "thread": "nvmf_tgt_poll_group_000", 00:18:22.429 "listen_address": { 00:18:22.429 "trtype": "TCP", 00:18:22.429 "adrfam": "IPv4", 00:18:22.429 "traddr": "10.0.0.2", 00:18:22.429 "trsvcid": "4420" 00:18:22.429 }, 00:18:22.429 "peer_address": { 00:18:22.429 "trtype": "TCP", 00:18:22.429 "adrfam": "IPv4", 00:18:22.429 "traddr": "10.0.0.1", 00:18:22.429 "trsvcid": "51764" 00:18:22.430 }, 00:18:22.430 "auth": { 00:18:22.430 "state": "completed", 00:18:22.430 "digest": "sha256", 00:18:22.430 "dhgroup": "ffdhe3072" 00:18:22.430 } 00:18:22.430 } 00:18:22.430 ]' 00:18:22.430 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.430 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:22.430 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.430 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:22.430 11:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.430 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.430 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.430 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.689 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDNhYTgwMzIzZDc0MmNhNzRkOGUwOTMwODI4NTFmYzg2YmQ0NzA5Nzg5ZTk0NDFm9lQsYQ==: --dhchap-ctrl-secret DHHC-1:01:ZDkzYzRjODhlMGI1YmM3MjM5NmU0N2RjN2Q1NjcyZDT0j6Fg: 00:18:23.259 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
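Each round of the trace above exercises one (digest, dhgroup, key) combination with the same sequence of commands. The following is a minimal sketch of a single round, reconstructed only from the calls visible in this log: target-side rpc_cmd goes to the default SPDK target socket while hostrpc adds -s /var/tmp/host.sock, rpc.py abbreviates the full scripts/rpc.py path used in the log, key1/ckey1 are assumed to have been registered with the target earlier in the test (not shown in this excerpt), and HOST_NQN, KEY1 and CKEY1 are placeholder shell variables rather than values taken from the log.

    # host-side RPC: restrict the initiator to the digest/dhgroup under test
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # target-side RPC: allow the host NQN to authenticate with key1/ckey1
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOST_NQN" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # host-side RPC: attach a controller over TCP using DH-HMAC-CHAP
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOST_NQN" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # verify the qpair completed authentication with the expected parameters
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # repeat the same authentication from the kernel initiator with the DHHC-1 secrets
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -q "$HOST_NQN" \
        --dhchap-secret "$KEY1" --dhchap-ctrl-secret "$CKEY1"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOST_NQN"

The jq checks in the log additionally confirm .auth.digest and .auth.dhgroup on the reported qpair match the values passed to bdev_nvme_set_options before the next combination is attempted.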
00:18:23.259 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:23.259 11:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.259 11:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.519 11:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.519 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.519 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:23.519 11:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:23.519 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:23.519 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.519 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:23.519 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:23.519 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:23.519 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.519 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:23.519 11:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.519 11:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.519 11:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.519 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.519 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.778 00:18:23.778 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.778 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.778 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.037 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.037 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.037 11:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.037 11:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:24.037 11:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.037 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.037 { 00:18:24.037 "cntlid": 23, 00:18:24.037 "qid": 0, 00:18:24.037 "state": "enabled", 00:18:24.037 "thread": "nvmf_tgt_poll_group_000", 00:18:24.037 "listen_address": { 00:18:24.037 "trtype": "TCP", 00:18:24.037 "adrfam": "IPv4", 00:18:24.037 "traddr": "10.0.0.2", 00:18:24.037 "trsvcid": "4420" 00:18:24.037 }, 00:18:24.037 "peer_address": { 00:18:24.037 "trtype": "TCP", 00:18:24.037 "adrfam": "IPv4", 00:18:24.037 "traddr": "10.0.0.1", 00:18:24.037 "trsvcid": "51798" 00:18:24.037 }, 00:18:24.037 "auth": { 00:18:24.037 "state": "completed", 00:18:24.037 "digest": "sha256", 00:18:24.037 "dhgroup": "ffdhe3072" 00:18:24.037 } 00:18:24.037 } 00:18:24.037 ]' 00:18:24.037 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.037 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:24.037 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.037 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:24.037 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.037 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.037 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.037 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.297 11:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzNkMTg1YmI1NTc1N2RmOTgxZDExNDdhZGI1NDhkYzM0MzEzYTExNDg4OGU4YjQ4YzFlY2I1NzZkMjZjZmIxMJ7vT24=: 00:18:25.234 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.234 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:25.234 11:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.234 11:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.234 11:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.234 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:25.234 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.234 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:25.234 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:25.234 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:18:25.234 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.234 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:25.234 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:25.234 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:25.234 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.234 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.234 11:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.234 11:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.234 11:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.234 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.234 11:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.494 00:18:25.494 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.494 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.494 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.754 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.754 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.754 11:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.754 11:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.754 11:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.754 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.754 { 00:18:25.754 "cntlid": 25, 00:18:25.754 "qid": 0, 00:18:25.754 "state": "enabled", 00:18:25.754 "thread": "nvmf_tgt_poll_group_000", 00:18:25.754 "listen_address": { 00:18:25.754 "trtype": "TCP", 00:18:25.754 "adrfam": "IPv4", 00:18:25.754 "traddr": "10.0.0.2", 00:18:25.754 "trsvcid": "4420" 00:18:25.754 }, 00:18:25.754 "peer_address": { 00:18:25.754 "trtype": "TCP", 00:18:25.754 "adrfam": "IPv4", 00:18:25.754 "traddr": "10.0.0.1", 00:18:25.754 "trsvcid": "51830" 00:18:25.754 }, 00:18:25.754 "auth": { 00:18:25.754 "state": "completed", 00:18:25.754 "digest": "sha256", 00:18:25.754 "dhgroup": "ffdhe4096" 00:18:25.754 } 00:18:25.754 } 00:18:25.754 ]' 00:18:25.754 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.754 11:29:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:25.754 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.754 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:25.754 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.754 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.754 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.755 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.014 11:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzQ2ZTFiOGI3NGI3MDQyYTEzMDYwNmY0ZTIxMDI1MmM0YWI0NTI2OGQzNjhhZTNh4MSdMw==: --dhchap-ctrl-secret DHHC-1:03:MTQ2M2UzZDUwMGVmMWE3N2NiYjJhZDRmZWM3ZmNmNDBmY2ExNWYzZGJiZjJjMDZkN2E3YjlhYWZiN2ZiMWJmMqSK4cM=: 00:18:26.952 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.952 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:26.952 11:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.952 11:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.952 11:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.952 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.952 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:26.952 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:26.952 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:26.952 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.952 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:26.952 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:26.952 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:26.952 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.952 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.952 11:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.952 11:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.952 11:29:55 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.952 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.952 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.212 00:18:27.212 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.212 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.212 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.471 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.471 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.471 11:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.471 11:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.471 11:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.471 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.471 { 00:18:27.471 "cntlid": 27, 00:18:27.471 "qid": 0, 00:18:27.471 "state": "enabled", 00:18:27.471 "thread": "nvmf_tgt_poll_group_000", 00:18:27.471 "listen_address": { 00:18:27.471 "trtype": "TCP", 00:18:27.471 "adrfam": "IPv4", 00:18:27.471 "traddr": "10.0.0.2", 00:18:27.471 "trsvcid": "4420" 00:18:27.471 }, 00:18:27.471 "peer_address": { 00:18:27.471 "trtype": "TCP", 00:18:27.471 "adrfam": "IPv4", 00:18:27.471 "traddr": "10.0.0.1", 00:18:27.471 "trsvcid": "51856" 00:18:27.471 }, 00:18:27.471 "auth": { 00:18:27.471 "state": "completed", 00:18:27.471 "digest": "sha256", 00:18:27.471 "dhgroup": "ffdhe4096" 00:18:27.471 } 00:18:27.471 } 00:18:27.471 ]' 00:18:27.471 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.471 11:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:27.471 11:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.471 11:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:27.471 11:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.471 11:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.471 11:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.471 11:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.731 11:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDRlZjc5ZGJlYzY5MWM1MjEzZTBiYTY0OTZlMmU2YTPpkw+v: --dhchap-ctrl-secret DHHC-1:02:ZWQ3NjA0NTdjNDE3MjU2ZTY5MGI2YzI4YTQ3NzMzMjc3NDhjNTM2NDQ0NjEzY2E4JTvMtw==: 00:18:28.671 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.671 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:28.671 11:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.671 11:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.671 11:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.671 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.671 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:28.671 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:28.671 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:28.671 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.671 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:28.671 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:28.671 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:28.671 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.671 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.671 11:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.671 11:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.671 11:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.671 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.671 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.931 00:18:28.931 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.931 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.931 11:29:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.191 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.191 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.191 11:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.191 11:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.191 11:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.191 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.191 { 00:18:29.191 "cntlid": 29, 00:18:29.191 "qid": 0, 00:18:29.191 "state": "enabled", 00:18:29.191 "thread": "nvmf_tgt_poll_group_000", 00:18:29.191 "listen_address": { 00:18:29.191 "trtype": "TCP", 00:18:29.191 "adrfam": "IPv4", 00:18:29.191 "traddr": "10.0.0.2", 00:18:29.191 "trsvcid": "4420" 00:18:29.191 }, 00:18:29.191 "peer_address": { 00:18:29.191 "trtype": "TCP", 00:18:29.191 "adrfam": "IPv4", 00:18:29.191 "traddr": "10.0.0.1", 00:18:29.191 "trsvcid": "51876" 00:18:29.191 }, 00:18:29.191 "auth": { 00:18:29.191 "state": "completed", 00:18:29.191 "digest": "sha256", 00:18:29.191 "dhgroup": "ffdhe4096" 00:18:29.191 } 00:18:29.191 } 00:18:29.191 ]' 00:18:29.191 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.191 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:29.191 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.191 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:29.191 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.191 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.191 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.191 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.450 11:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDNhYTgwMzIzZDc0MmNhNzRkOGUwOTMwODI4NTFmYzg2YmQ0NzA5Nzg5ZTk0NDFm9lQsYQ==: --dhchap-ctrl-secret DHHC-1:01:ZDkzYzRjODhlMGI1YmM3MjM5NmU0N2RjN2Q1NjcyZDT0j6Fg: 00:18:30.019 11:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.278 11:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:30.278 11:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.278 11:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.278 11:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.278 11:29:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.278 11:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:30.278 11:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:30.278 11:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:30.278 11:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.278 11:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:30.278 11:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:30.278 11:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:30.278 11:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.278 11:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:30.278 11:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.278 11:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.278 11:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.278 11:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.278 11:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.538 00:18:30.538 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.538 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.538 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.797 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.797 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.797 11:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.797 11:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.797 11:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.797 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.797 { 00:18:30.797 "cntlid": 31, 00:18:30.797 "qid": 0, 00:18:30.797 "state": "enabled", 00:18:30.797 "thread": "nvmf_tgt_poll_group_000", 00:18:30.797 "listen_address": { 00:18:30.797 "trtype": "TCP", 00:18:30.797 "adrfam": "IPv4", 00:18:30.797 "traddr": "10.0.0.2", 00:18:30.797 "trsvcid": "4420" 00:18:30.797 }, 
00:18:30.797 "peer_address": { 00:18:30.797 "trtype": "TCP", 00:18:30.797 "adrfam": "IPv4", 00:18:30.797 "traddr": "10.0.0.1", 00:18:30.797 "trsvcid": "40588" 00:18:30.797 }, 00:18:30.797 "auth": { 00:18:30.797 "state": "completed", 00:18:30.797 "digest": "sha256", 00:18:30.797 "dhgroup": "ffdhe4096" 00:18:30.797 } 00:18:30.797 } 00:18:30.797 ]' 00:18:30.797 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.797 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:30.797 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.797 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:30.797 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.797 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.797 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.057 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.057 11:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzNkMTg1YmI1NTc1N2RmOTgxZDExNDdhZGI1NDhkYzM0MzEzYTExNDg4OGU4YjQ4YzFlY2I1NzZkMjZjZmIxMJ7vT24=: 00:18:32.017 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.017 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.017 11:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.017 11:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.017 11:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.017 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:32.017 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.017 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:32.017 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:32.017 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:32.017 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.017 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:32.017 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:32.017 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:32.017 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:32.017 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.017 11:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.017 11:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.017 11:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.017 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.017 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.278 00:18:32.278 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.278 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.278 11:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.538 11:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.538 11:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.538 11:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.538 11:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.538 11:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.538 11:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.538 { 00:18:32.538 "cntlid": 33, 00:18:32.538 "qid": 0, 00:18:32.538 "state": "enabled", 00:18:32.538 "thread": "nvmf_tgt_poll_group_000", 00:18:32.538 "listen_address": { 00:18:32.538 "trtype": "TCP", 00:18:32.538 "adrfam": "IPv4", 00:18:32.538 "traddr": "10.0.0.2", 00:18:32.538 "trsvcid": "4420" 00:18:32.538 }, 00:18:32.538 "peer_address": { 00:18:32.538 "trtype": "TCP", 00:18:32.538 "adrfam": "IPv4", 00:18:32.538 "traddr": "10.0.0.1", 00:18:32.538 "trsvcid": "40610" 00:18:32.538 }, 00:18:32.538 "auth": { 00:18:32.538 "state": "completed", 00:18:32.538 "digest": "sha256", 00:18:32.538 "dhgroup": "ffdhe6144" 00:18:32.538 } 00:18:32.538 } 00:18:32.538 ]' 00:18:32.538 11:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.538 11:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:32.538 11:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.538 11:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:32.538 11:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.797 11:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.797 11:30:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.797 11:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.797 11:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzQ2ZTFiOGI3NGI3MDQyYTEzMDYwNmY0ZTIxMDI1MmM0YWI0NTI2OGQzNjhhZTNh4MSdMw==: --dhchap-ctrl-secret DHHC-1:03:MTQ2M2UzZDUwMGVmMWE3N2NiYjJhZDRmZWM3ZmNmNDBmY2ExNWYzZGJiZjJjMDZkN2E3YjlhYWZiN2ZiMWJmMqSK4cM=: 00:18:33.771 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.771 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:33.771 11:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.771 11:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.771 11:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.771 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.771 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:33.771 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:33.771 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:33.771 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.771 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:33.771 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:33.771 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:33.771 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.771 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.771 11:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.771 11:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.771 11:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.771 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.771 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.032 00:18:34.032 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.032 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.032 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.293 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.293 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.293 11:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.293 11:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.293 11:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.293 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.293 { 00:18:34.293 "cntlid": 35, 00:18:34.293 "qid": 0, 00:18:34.293 "state": "enabled", 00:18:34.293 "thread": "nvmf_tgt_poll_group_000", 00:18:34.293 "listen_address": { 00:18:34.293 "trtype": "TCP", 00:18:34.293 "adrfam": "IPv4", 00:18:34.293 "traddr": "10.0.0.2", 00:18:34.293 "trsvcid": "4420" 00:18:34.293 }, 00:18:34.293 "peer_address": { 00:18:34.293 "trtype": "TCP", 00:18:34.293 "adrfam": "IPv4", 00:18:34.293 "traddr": "10.0.0.1", 00:18:34.293 "trsvcid": "40634" 00:18:34.293 }, 00:18:34.293 "auth": { 00:18:34.293 "state": "completed", 00:18:34.293 "digest": "sha256", 00:18:34.293 "dhgroup": "ffdhe6144" 00:18:34.293 } 00:18:34.293 } 00:18:34.293 ]' 00:18:34.293 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.293 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:34.293 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.293 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:34.293 11:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.554 11:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.554 11:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.554 11:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.554 11:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDRlZjc5ZGJlYzY5MWM1MjEzZTBiYTY0OTZlMmU2YTPpkw+v: --dhchap-ctrl-secret DHHC-1:02:ZWQ3NjA0NTdjNDE3MjU2ZTY5MGI2YzI4YTQ3NzMzMjc3NDhjNTM2NDQ0NjEzY2E4JTvMtw==: 00:18:35.495 11:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.495 11:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.495 11:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.495 11:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.495 11:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.495 11:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.495 11:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:35.495 11:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:35.495 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:35.495 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.495 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:35.495 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:35.495 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:35.495 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.495 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.495 11:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.495 11:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.495 11:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.495 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.495 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.755 00:18:36.016 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.016 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.016 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.016 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.016 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.016 11:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.016 11:30:04 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:36.016 11:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.016 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.016 { 00:18:36.016 "cntlid": 37, 00:18:36.016 "qid": 0, 00:18:36.016 "state": "enabled", 00:18:36.016 "thread": "nvmf_tgt_poll_group_000", 00:18:36.016 "listen_address": { 00:18:36.016 "trtype": "TCP", 00:18:36.016 "adrfam": "IPv4", 00:18:36.016 "traddr": "10.0.0.2", 00:18:36.016 "trsvcid": "4420" 00:18:36.016 }, 00:18:36.016 "peer_address": { 00:18:36.016 "trtype": "TCP", 00:18:36.016 "adrfam": "IPv4", 00:18:36.016 "traddr": "10.0.0.1", 00:18:36.016 "trsvcid": "40654" 00:18:36.016 }, 00:18:36.016 "auth": { 00:18:36.016 "state": "completed", 00:18:36.016 "digest": "sha256", 00:18:36.016 "dhgroup": "ffdhe6144" 00:18:36.016 } 00:18:36.016 } 00:18:36.016 ]' 00:18:36.016 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.016 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:36.017 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.277 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:36.277 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.277 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.277 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.277 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.277 11:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDNhYTgwMzIzZDc0MmNhNzRkOGUwOTMwODI4NTFmYzg2YmQ0NzA5Nzg5ZTk0NDFm9lQsYQ==: --dhchap-ctrl-secret DHHC-1:01:ZDkzYzRjODhlMGI1YmM3MjM5NmU0N2RjN2Q1NjcyZDT0j6Fg: 00:18:37.220 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.220 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.220 11:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.220 11:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.220 11:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.220 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.220 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:37.220 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:37.220 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:18:37.220 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.220 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:37.220 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:37.220 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:37.220 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.220 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:37.220 11:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.220 11:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.220 11:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.220 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:37.220 11:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:37.481 00:18:37.743 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.743 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.743 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.743 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.743 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.743 11:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.743 11:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.743 11:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.743 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.743 { 00:18:37.743 "cntlid": 39, 00:18:37.743 "qid": 0, 00:18:37.743 "state": "enabled", 00:18:37.743 "thread": "nvmf_tgt_poll_group_000", 00:18:37.743 "listen_address": { 00:18:37.743 "trtype": "TCP", 00:18:37.743 "adrfam": "IPv4", 00:18:37.743 "traddr": "10.0.0.2", 00:18:37.743 "trsvcid": "4420" 00:18:37.743 }, 00:18:37.743 "peer_address": { 00:18:37.743 "trtype": "TCP", 00:18:37.743 "adrfam": "IPv4", 00:18:37.743 "traddr": "10.0.0.1", 00:18:37.743 "trsvcid": "40676" 00:18:37.743 }, 00:18:37.743 "auth": { 00:18:37.743 "state": "completed", 00:18:37.743 "digest": "sha256", 00:18:37.743 "dhgroup": "ffdhe6144" 00:18:37.743 } 00:18:37.743 } 00:18:37.743 ]' 00:18:37.743 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.743 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:37.743 11:30:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.008 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:38.008 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.008 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.008 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.008 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.008 11:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzNkMTg1YmI1NTc1N2RmOTgxZDExNDdhZGI1NDhkYzM0MzEzYTExNDg4OGU4YjQ4YzFlY2I1NzZkMjZjZmIxMJ7vT24=: 00:18:38.951 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.951 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:38.951 11:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.951 11:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.951 11:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.951 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:38.951 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.951 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:38.951 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:39.211 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:39.211 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.211 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:39.211 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:39.211 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:39.211 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.211 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.211 11:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.211 11:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.211 11:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.211 11:30:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.211 11:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.783 00:18:39.783 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.783 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.783 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.783 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.783 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.783 11:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.783 11:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.783 11:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.783 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.783 { 00:18:39.783 "cntlid": 41, 00:18:39.783 "qid": 0, 00:18:39.783 "state": "enabled", 00:18:39.783 "thread": "nvmf_tgt_poll_group_000", 00:18:39.783 "listen_address": { 00:18:39.783 "trtype": "TCP", 00:18:39.783 "adrfam": "IPv4", 00:18:39.783 "traddr": "10.0.0.2", 00:18:39.783 "trsvcid": "4420" 00:18:39.783 }, 00:18:39.783 "peer_address": { 00:18:39.783 "trtype": "TCP", 00:18:39.783 "adrfam": "IPv4", 00:18:39.783 "traddr": "10.0.0.1", 00:18:39.783 "trsvcid": "41624" 00:18:39.783 }, 00:18:39.783 "auth": { 00:18:39.783 "state": "completed", 00:18:39.783 "digest": "sha256", 00:18:39.783 "dhgroup": "ffdhe8192" 00:18:39.783 } 00:18:39.783 } 00:18:39.783 ]' 00:18:39.783 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.783 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:39.783 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.043 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:40.043 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.043 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.043 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.043 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.044 11:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:00:YzQ2ZTFiOGI3NGI3MDQyYTEzMDYwNmY0ZTIxMDI1MmM0YWI0NTI2OGQzNjhhZTNh4MSdMw==: --dhchap-ctrl-secret DHHC-1:03:MTQ2M2UzZDUwMGVmMWE3N2NiYjJhZDRmZWM3ZmNmNDBmY2ExNWYzZGJiZjJjMDZkN2E3YjlhYWZiN2ZiMWJmMqSK4cM=: 00:18:40.983 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.983 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:40.983 11:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.983 11:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.983 11:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.983 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.983 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:40.983 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:40.983 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:40.983 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.983 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:40.983 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:40.983 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:40.983 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.983 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.983 11:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.983 11:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.983 11:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.983 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.983 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.554 00:18:41.554 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.554 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.554 11:30:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.815 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.815 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.815 11:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.815 11:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.815 11:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.815 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.815 { 00:18:41.815 "cntlid": 43, 00:18:41.815 "qid": 0, 00:18:41.815 "state": "enabled", 00:18:41.815 "thread": "nvmf_tgt_poll_group_000", 00:18:41.815 "listen_address": { 00:18:41.815 "trtype": "TCP", 00:18:41.815 "adrfam": "IPv4", 00:18:41.815 "traddr": "10.0.0.2", 00:18:41.815 "trsvcid": "4420" 00:18:41.815 }, 00:18:41.815 "peer_address": { 00:18:41.815 "trtype": "TCP", 00:18:41.815 "adrfam": "IPv4", 00:18:41.815 "traddr": "10.0.0.1", 00:18:41.815 "trsvcid": "41636" 00:18:41.815 }, 00:18:41.815 "auth": { 00:18:41.815 "state": "completed", 00:18:41.815 "digest": "sha256", 00:18:41.815 "dhgroup": "ffdhe8192" 00:18:41.815 } 00:18:41.815 } 00:18:41.815 ]' 00:18:41.815 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.815 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:41.815 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.815 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:41.815 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.075 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.075 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.075 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.075 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDRlZjc5ZGJlYzY5MWM1MjEzZTBiYTY0OTZlMmU2YTPpkw+v: --dhchap-ctrl-secret DHHC-1:02:ZWQ3NjA0NTdjNDE3MjU2ZTY5MGI2YzI4YTQ3NzMzMjc3NDhjNTM2NDQ0NjEzY2E4JTvMtw==: 00:18:43.016 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.016 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:43.016 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.016 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.016 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.016 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.016 11:30:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:43.016 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:43.016 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:43.016 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.016 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:43.016 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:43.016 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:43.016 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.016 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.016 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.016 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.016 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.016 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.016 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.586 00:18:43.586 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.586 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.586 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.846 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.846 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.846 11:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.846 11:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.846 11:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.846 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.846 { 00:18:43.846 "cntlid": 45, 00:18:43.846 "qid": 0, 00:18:43.846 "state": "enabled", 00:18:43.846 "thread": "nvmf_tgt_poll_group_000", 00:18:43.846 "listen_address": { 00:18:43.846 "trtype": "TCP", 00:18:43.846 "adrfam": "IPv4", 00:18:43.846 "traddr": "10.0.0.2", 00:18:43.846 "trsvcid": "4420" 00:18:43.846 }, 00:18:43.846 
"peer_address": { 00:18:43.846 "trtype": "TCP", 00:18:43.846 "adrfam": "IPv4", 00:18:43.846 "traddr": "10.0.0.1", 00:18:43.846 "trsvcid": "41654" 00:18:43.846 }, 00:18:43.846 "auth": { 00:18:43.846 "state": "completed", 00:18:43.846 "digest": "sha256", 00:18:43.846 "dhgroup": "ffdhe8192" 00:18:43.846 } 00:18:43.846 } 00:18:43.846 ]' 00:18:43.846 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.846 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:43.846 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.846 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:43.846 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.846 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.846 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.846 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.106 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDNhYTgwMzIzZDc0MmNhNzRkOGUwOTMwODI4NTFmYzg2YmQ0NzA5Nzg5ZTk0NDFm9lQsYQ==: --dhchap-ctrl-secret DHHC-1:01:ZDkzYzRjODhlMGI1YmM3MjM5NmU0N2RjN2Q1NjcyZDT0j6Fg: 00:18:45.046 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.046 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.046 11:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.046 11:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.046 11:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.046 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.046 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:45.046 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:45.046 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:45.046 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.046 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:45.046 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:45.046 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:45.046 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.046 11:30:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:45.046 11:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.046 11:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.046 11:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.046 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.047 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.617 00:18:45.617 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.617 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.617 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.617 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.617 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.617 11:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.617 11:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.617 11:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.617 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.617 { 00:18:45.617 "cntlid": 47, 00:18:45.617 "qid": 0, 00:18:45.617 "state": "enabled", 00:18:45.617 "thread": "nvmf_tgt_poll_group_000", 00:18:45.617 "listen_address": { 00:18:45.617 "trtype": "TCP", 00:18:45.617 "adrfam": "IPv4", 00:18:45.617 "traddr": "10.0.0.2", 00:18:45.617 "trsvcid": "4420" 00:18:45.617 }, 00:18:45.617 "peer_address": { 00:18:45.617 "trtype": "TCP", 00:18:45.617 "adrfam": "IPv4", 00:18:45.617 "traddr": "10.0.0.1", 00:18:45.617 "trsvcid": "41700" 00:18:45.617 }, 00:18:45.617 "auth": { 00:18:45.617 "state": "completed", 00:18:45.617 "digest": "sha256", 00:18:45.617 "dhgroup": "ffdhe8192" 00:18:45.617 } 00:18:45.617 } 00:18:45.617 ]' 00:18:45.617 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.877 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:45.877 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.877 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:45.877 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.877 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.877 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.877 11:30:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.137 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzNkMTg1YmI1NTc1N2RmOTgxZDExNDdhZGI1NDhkYzM0MzEzYTExNDg4OGU4YjQ4YzFlY2I1NzZkMjZjZmIxMJ7vT24=: 00:18:46.709 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.709 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:46.709 11:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.709 11:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.709 11:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.709 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:46.709 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:46.709 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.709 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:46.709 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:46.969 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:46.969 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.969 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:46.969 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:46.969 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:46.969 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.969 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.969 11:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.969 11:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.969 11:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.969 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.969 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.269 00:18:47.269 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.269 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.269 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.269 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.269 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.269 11:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.269 11:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.269 11:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.269 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.269 { 00:18:47.269 "cntlid": 49, 00:18:47.269 "qid": 0, 00:18:47.269 "state": "enabled", 00:18:47.269 "thread": "nvmf_tgt_poll_group_000", 00:18:47.269 "listen_address": { 00:18:47.269 "trtype": "TCP", 00:18:47.269 "adrfam": "IPv4", 00:18:47.269 "traddr": "10.0.0.2", 00:18:47.269 "trsvcid": "4420" 00:18:47.269 }, 00:18:47.269 "peer_address": { 00:18:47.269 "trtype": "TCP", 00:18:47.269 "adrfam": "IPv4", 00:18:47.269 "traddr": "10.0.0.1", 00:18:47.269 "trsvcid": "41732" 00:18:47.269 }, 00:18:47.269 "auth": { 00:18:47.269 "state": "completed", 00:18:47.269 "digest": "sha384", 00:18:47.269 "dhgroup": "null" 00:18:47.269 } 00:18:47.269 } 00:18:47.269 ]' 00:18:47.269 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.532 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:47.532 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.532 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:47.532 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.532 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.533 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.533 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.533 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzQ2ZTFiOGI3NGI3MDQyYTEzMDYwNmY0ZTIxMDI1MmM0YWI0NTI2OGQzNjhhZTNh4MSdMw==: --dhchap-ctrl-secret DHHC-1:03:MTQ2M2UzZDUwMGVmMWE3N2NiYjJhZDRmZWM3ZmNmNDBmY2ExNWYzZGJiZjJjMDZkN2E3YjlhYWZiN2ZiMWJmMqSK4cM=: 00:18:48.474 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.474 11:30:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:48.474 11:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.475 11:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.475 11:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.475 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.475 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:48.475 11:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:48.475 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:48.475 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.475 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:48.475 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:48.475 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:48.475 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.475 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.475 11:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.475 11:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.475 11:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.475 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.475 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.736 00:18:48.736 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.736 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.736 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.998 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.998 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.998 11:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.998 11:30:17 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:48.998 11:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.998 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.998 { 00:18:48.998 "cntlid": 51, 00:18:48.998 "qid": 0, 00:18:48.998 "state": "enabled", 00:18:48.998 "thread": "nvmf_tgt_poll_group_000", 00:18:48.998 "listen_address": { 00:18:48.998 "trtype": "TCP", 00:18:48.998 "adrfam": "IPv4", 00:18:48.998 "traddr": "10.0.0.2", 00:18:48.998 "trsvcid": "4420" 00:18:48.998 }, 00:18:48.998 "peer_address": { 00:18:48.998 "trtype": "TCP", 00:18:48.998 "adrfam": "IPv4", 00:18:48.998 "traddr": "10.0.0.1", 00:18:48.998 "trsvcid": "41756" 00:18:48.998 }, 00:18:48.998 "auth": { 00:18:48.998 "state": "completed", 00:18:48.998 "digest": "sha384", 00:18:48.998 "dhgroup": "null" 00:18:48.998 } 00:18:48.998 } 00:18:48.998 ]' 00:18:48.998 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.998 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:48.998 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.998 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:48.998 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.998 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.998 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.998 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.258 11:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDRlZjc5ZGJlYzY5MWM1MjEzZTBiYTY0OTZlMmU2YTPpkw+v: --dhchap-ctrl-secret DHHC-1:02:ZWQ3NjA0NTdjNDE3MjU2ZTY5MGI2YzI4YTQ3NzMzMjc3NDhjNTM2NDQ0NjEzY2E4JTvMtw==: 00:18:50.198 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.198 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:50.198 11:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.198 11:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.198 11:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.198 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.198 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:50.198 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:50.198 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:50.198 11:30:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.198 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:50.198 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:50.198 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:50.198 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.198 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.198 11:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.198 11:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.198 11:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.198 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.198 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.459 00:18:50.460 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.460 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.460 11:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.460 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.460 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.460 11:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.460 11:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.460 11:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.460 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.460 { 00:18:50.460 "cntlid": 53, 00:18:50.460 "qid": 0, 00:18:50.460 "state": "enabled", 00:18:50.460 "thread": "nvmf_tgt_poll_group_000", 00:18:50.460 "listen_address": { 00:18:50.460 "trtype": "TCP", 00:18:50.460 "adrfam": "IPv4", 00:18:50.460 "traddr": "10.0.0.2", 00:18:50.460 "trsvcid": "4420" 00:18:50.460 }, 00:18:50.460 "peer_address": { 00:18:50.460 "trtype": "TCP", 00:18:50.460 "adrfam": "IPv4", 00:18:50.460 "traddr": "10.0.0.1", 00:18:50.460 "trsvcid": "37656" 00:18:50.460 }, 00:18:50.460 "auth": { 00:18:50.460 "state": "completed", 00:18:50.460 "digest": "sha384", 00:18:50.460 "dhgroup": "null" 00:18:50.460 } 00:18:50.460 } 00:18:50.460 ]' 00:18:50.460 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.721 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:18:50.721 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.721 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:50.721 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.721 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.721 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.721 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.982 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDNhYTgwMzIzZDc0MmNhNzRkOGUwOTMwODI4NTFmYzg2YmQ0NzA5Nzg5ZTk0NDFm9lQsYQ==: --dhchap-ctrl-secret DHHC-1:01:ZDkzYzRjODhlMGI1YmM3MjM5NmU0N2RjN2Q1NjcyZDT0j6Fg: 00:18:51.553 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.553 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:51.553 11:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.553 11:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.553 11:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.553 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.553 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:51.553 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:51.814 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:51.814 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.814 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:51.814 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:51.814 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:51.814 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.814 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:51.814 11:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.814 11:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.814 11:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.814 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:51.814 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:52.074 00:18:52.074 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.074 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.074 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.335 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.335 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.335 11:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.335 11:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.335 11:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.335 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.335 { 00:18:52.335 "cntlid": 55, 00:18:52.335 "qid": 0, 00:18:52.335 "state": "enabled", 00:18:52.335 "thread": "nvmf_tgt_poll_group_000", 00:18:52.335 "listen_address": { 00:18:52.335 "trtype": "TCP", 00:18:52.335 "adrfam": "IPv4", 00:18:52.335 "traddr": "10.0.0.2", 00:18:52.335 "trsvcid": "4420" 00:18:52.335 }, 00:18:52.335 "peer_address": { 00:18:52.335 "trtype": "TCP", 00:18:52.335 "adrfam": "IPv4", 00:18:52.335 "traddr": "10.0.0.1", 00:18:52.335 "trsvcid": "37690" 00:18:52.335 }, 00:18:52.335 "auth": { 00:18:52.335 "state": "completed", 00:18:52.335 "digest": "sha384", 00:18:52.335 "dhgroup": "null" 00:18:52.335 } 00:18:52.335 } 00:18:52.335 ]' 00:18:52.335 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.335 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:52.335 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.335 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:52.335 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.335 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.335 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.335 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.595 11:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzNkMTg1YmI1NTc1N2RmOTgxZDExNDdhZGI1NDhkYzM0MzEzYTExNDg4OGU4YjQ4YzFlY2I1NzZkMjZjZmIxMJ7vT24=: 00:18:53.164 11:30:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.164 11:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:53.164 11:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.164 11:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.164 11:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.164 11:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:53.164 11:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.164 11:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:53.164 11:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:53.424 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:53.425 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.425 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:53.425 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:53.425 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:53.425 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.425 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.425 11:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.425 11:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.425 11:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.425 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.425 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.684 00:18:53.684 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.684 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.684 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.684 11:30:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.943 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.943 11:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.943 11:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.943 11:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.943 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.943 { 00:18:53.943 "cntlid": 57, 00:18:53.943 "qid": 0, 00:18:53.943 "state": "enabled", 00:18:53.943 "thread": "nvmf_tgt_poll_group_000", 00:18:53.943 "listen_address": { 00:18:53.943 "trtype": "TCP", 00:18:53.943 "adrfam": "IPv4", 00:18:53.943 "traddr": "10.0.0.2", 00:18:53.943 "trsvcid": "4420" 00:18:53.943 }, 00:18:53.943 "peer_address": { 00:18:53.943 "trtype": "TCP", 00:18:53.943 "adrfam": "IPv4", 00:18:53.943 "traddr": "10.0.0.1", 00:18:53.943 "trsvcid": "37714" 00:18:53.943 }, 00:18:53.943 "auth": { 00:18:53.943 "state": "completed", 00:18:53.943 "digest": "sha384", 00:18:53.943 "dhgroup": "ffdhe2048" 00:18:53.943 } 00:18:53.943 } 00:18:53.943 ]' 00:18:53.943 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.943 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.943 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.943 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:53.943 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.943 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.943 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.943 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.202 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzQ2ZTFiOGI3NGI3MDQyYTEzMDYwNmY0ZTIxMDI1MmM0YWI0NTI2OGQzNjhhZTNh4MSdMw==: --dhchap-ctrl-secret DHHC-1:03:MTQ2M2UzZDUwMGVmMWE3N2NiYjJhZDRmZWM3ZmNmNDBmY2ExNWYzZGJiZjJjMDZkN2E3YjlhYWZiN2ZiMWJmMqSK4cM=: 00:18:54.769 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.769 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:54.769 11:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.769 11:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.769 11:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.769 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.769 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:54.769 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:55.028 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:55.028 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.028 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:55.028 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:55.028 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:55.028 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.028 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.028 11:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.028 11:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.028 11:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.028 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.028 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.287 00:18:55.287 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.287 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.287 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.547 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.547 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.547 11:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.547 11:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.547 11:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.547 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.547 { 00:18:55.547 "cntlid": 59, 00:18:55.547 "qid": 0, 00:18:55.547 "state": "enabled", 00:18:55.547 "thread": "nvmf_tgt_poll_group_000", 00:18:55.547 "listen_address": { 00:18:55.547 "trtype": "TCP", 00:18:55.547 "adrfam": "IPv4", 00:18:55.547 "traddr": "10.0.0.2", 00:18:55.547 "trsvcid": "4420" 00:18:55.547 }, 00:18:55.547 "peer_address": { 00:18:55.547 "trtype": "TCP", 00:18:55.547 "adrfam": "IPv4", 00:18:55.547 
"traddr": "10.0.0.1", 00:18:55.547 "trsvcid": "37746" 00:18:55.547 }, 00:18:55.547 "auth": { 00:18:55.547 "state": "completed", 00:18:55.547 "digest": "sha384", 00:18:55.547 "dhgroup": "ffdhe2048" 00:18:55.547 } 00:18:55.547 } 00:18:55.547 ]' 00:18:55.547 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.547 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:55.547 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.547 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:55.547 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.547 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.547 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.547 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.806 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDRlZjc5ZGJlYzY5MWM1MjEzZTBiYTY0OTZlMmU2YTPpkw+v: --dhchap-ctrl-secret DHHC-1:02:ZWQ3NjA0NTdjNDE3MjU2ZTY5MGI2YzI4YTQ3NzMzMjc3NDhjNTM2NDQ0NjEzY2E4JTvMtw==: 00:18:56.374 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.634 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:56.634 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.634 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.634 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.634 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.634 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:56.634 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:56.634 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:56.634 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.634 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:56.634 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:56.634 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:56.634 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.634 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.634 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.634 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.634 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.634 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.634 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.892 00:18:56.892 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.892 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.892 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.152 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.152 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.152 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.152 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.152 11:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.152 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.152 { 00:18:57.152 "cntlid": 61, 00:18:57.152 "qid": 0, 00:18:57.152 "state": "enabled", 00:18:57.152 "thread": "nvmf_tgt_poll_group_000", 00:18:57.152 "listen_address": { 00:18:57.152 "trtype": "TCP", 00:18:57.152 "adrfam": "IPv4", 00:18:57.152 "traddr": "10.0.0.2", 00:18:57.152 "trsvcid": "4420" 00:18:57.152 }, 00:18:57.152 "peer_address": { 00:18:57.152 "trtype": "TCP", 00:18:57.152 "adrfam": "IPv4", 00:18:57.152 "traddr": "10.0.0.1", 00:18:57.152 "trsvcid": "37776" 00:18:57.152 }, 00:18:57.152 "auth": { 00:18:57.152 "state": "completed", 00:18:57.152 "digest": "sha384", 00:18:57.152 "dhgroup": "ffdhe2048" 00:18:57.152 } 00:18:57.152 } 00:18:57.152 ]' 00:18:57.152 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.152 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:57.152 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.152 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:57.152 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.152 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.152 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.152 11:30:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.411 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDNhYTgwMzIzZDc0MmNhNzRkOGUwOTMwODI4NTFmYzg2YmQ0NzA5Nzg5ZTk0NDFm9lQsYQ==: --dhchap-ctrl-secret DHHC-1:01:ZDkzYzRjODhlMGI1YmM3MjM5NmU0N2RjN2Q1NjcyZDT0j6Fg: 00:18:58.351 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.351 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:58.351 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.351 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.351 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.351 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.351 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:58.351 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:58.351 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:58.351 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.351 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:58.351 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:58.351 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:58.351 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.351 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:58.351 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.351 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.351 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.351 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:58.351 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:58.610 00:18:58.610 11:30:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.610 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.610 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.870 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.870 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.870 11:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.870 11:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.870 11:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.870 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.870 { 00:18:58.870 "cntlid": 63, 00:18:58.870 "qid": 0, 00:18:58.870 "state": "enabled", 00:18:58.870 "thread": "nvmf_tgt_poll_group_000", 00:18:58.870 "listen_address": { 00:18:58.870 "trtype": "TCP", 00:18:58.870 "adrfam": "IPv4", 00:18:58.870 "traddr": "10.0.0.2", 00:18:58.870 "trsvcid": "4420" 00:18:58.870 }, 00:18:58.870 "peer_address": { 00:18:58.870 "trtype": "TCP", 00:18:58.870 "adrfam": "IPv4", 00:18:58.870 "traddr": "10.0.0.1", 00:18:58.870 "trsvcid": "37812" 00:18:58.870 }, 00:18:58.870 "auth": { 00:18:58.870 "state": "completed", 00:18:58.870 "digest": "sha384", 00:18:58.870 "dhgroup": "ffdhe2048" 00:18:58.870 } 00:18:58.870 } 00:18:58.870 ]' 00:18:58.870 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.870 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:58.870 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.870 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:58.870 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.870 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.870 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.870 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.131 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzNkMTg1YmI1NTc1N2RmOTgxZDExNDdhZGI1NDhkYzM0MzEzYTExNDg4OGU4YjQ4YzFlY2I1NzZkMjZjZmIxMJ7vT24=: 00:18:59.700 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.960 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.960 11:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.960 11:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:59.960 11:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.960 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:59.960 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.960 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:59.960 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:59.960 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:59.960 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.960 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:59.960 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:59.960 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:59.960 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.960 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.960 11:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.960 11:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.960 11:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.960 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.960 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.220 00:19:00.220 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.220 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.220 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.479 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.479 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.479 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.479 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.479 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.479 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.479 { 
00:19:00.479 "cntlid": 65, 00:19:00.479 "qid": 0, 00:19:00.479 "state": "enabled", 00:19:00.479 "thread": "nvmf_tgt_poll_group_000", 00:19:00.479 "listen_address": { 00:19:00.479 "trtype": "TCP", 00:19:00.479 "adrfam": "IPv4", 00:19:00.479 "traddr": "10.0.0.2", 00:19:00.479 "trsvcid": "4420" 00:19:00.479 }, 00:19:00.479 "peer_address": { 00:19:00.479 "trtype": "TCP", 00:19:00.479 "adrfam": "IPv4", 00:19:00.479 "traddr": "10.0.0.1", 00:19:00.479 "trsvcid": "33388" 00:19:00.479 }, 00:19:00.479 "auth": { 00:19:00.479 "state": "completed", 00:19:00.479 "digest": "sha384", 00:19:00.479 "dhgroup": "ffdhe3072" 00:19:00.479 } 00:19:00.479 } 00:19:00.479 ]' 00:19:00.479 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.479 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:00.479 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.479 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:00.479 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.479 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.479 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.479 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.739 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzQ2ZTFiOGI3NGI3MDQyYTEzMDYwNmY0ZTIxMDI1MmM0YWI0NTI2OGQzNjhhZTNh4MSdMw==: --dhchap-ctrl-secret DHHC-1:03:MTQ2M2UzZDUwMGVmMWE3N2NiYjJhZDRmZWM3ZmNmNDBmY2ExNWYzZGJiZjJjMDZkN2E3YjlhYWZiN2ZiMWJmMqSK4cM=: 00:19:01.678 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.678 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:01.678 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.678 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.678 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.678 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.678 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:01.678 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:01.678 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:19:01.678 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.678 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:19:01.678 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:01.678 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:01.678 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.678 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.678 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.678 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.678 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.678 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.678 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.937 00:19:01.937 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.937 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.937 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.197 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.198 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.198 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.198 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.198 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.198 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.198 { 00:19:02.198 "cntlid": 67, 00:19:02.198 "qid": 0, 00:19:02.198 "state": "enabled", 00:19:02.198 "thread": "nvmf_tgt_poll_group_000", 00:19:02.198 "listen_address": { 00:19:02.198 "trtype": "TCP", 00:19:02.198 "adrfam": "IPv4", 00:19:02.198 "traddr": "10.0.0.2", 00:19:02.198 "trsvcid": "4420" 00:19:02.198 }, 00:19:02.198 "peer_address": { 00:19:02.198 "trtype": "TCP", 00:19:02.198 "adrfam": "IPv4", 00:19:02.198 "traddr": "10.0.0.1", 00:19:02.198 "trsvcid": "33406" 00:19:02.198 }, 00:19:02.198 "auth": { 00:19:02.198 "state": "completed", 00:19:02.198 "digest": "sha384", 00:19:02.198 "dhgroup": "ffdhe3072" 00:19:02.198 } 00:19:02.198 } 00:19:02.198 ]' 00:19:02.198 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.198 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:02.198 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.198 11:30:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:02.198 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.198 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.198 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.198 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.468 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDRlZjc5ZGJlYzY5MWM1MjEzZTBiYTY0OTZlMmU2YTPpkw+v: --dhchap-ctrl-secret DHHC-1:02:ZWQ3NjA0NTdjNDE3MjU2ZTY5MGI2YzI4YTQ3NzMzMjc3NDhjNTM2NDQ0NjEzY2E4JTvMtw==: 00:19:03.046 11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.307 11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:03.307 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.307 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.307 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.307 11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.307 11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:03.307 11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:03.307 11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:19:03.307 11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.307 11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:03.307 11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:03.307 11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:03.307 11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.307 11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.307 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.307 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.307 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.307 11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.307 11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.567 00:19:03.567 11:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.567 11:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.567 11:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.828 11:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.828 11:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.828 11:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.828 11:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.828 11:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.828 11:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.828 { 00:19:03.828 "cntlid": 69, 00:19:03.828 "qid": 0, 00:19:03.828 "state": "enabled", 00:19:03.828 "thread": "nvmf_tgt_poll_group_000", 00:19:03.828 "listen_address": { 00:19:03.828 "trtype": "TCP", 00:19:03.828 "adrfam": "IPv4", 00:19:03.828 "traddr": "10.0.0.2", 00:19:03.828 "trsvcid": "4420" 00:19:03.828 }, 00:19:03.828 "peer_address": { 00:19:03.828 "trtype": "TCP", 00:19:03.828 "adrfam": "IPv4", 00:19:03.828 "traddr": "10.0.0.1", 00:19:03.828 "trsvcid": "33422" 00:19:03.828 }, 00:19:03.828 "auth": { 00:19:03.828 "state": "completed", 00:19:03.828 "digest": "sha384", 00:19:03.828 "dhgroup": "ffdhe3072" 00:19:03.828 } 00:19:03.828 } 00:19:03.828 ]' 00:19:03.828 11:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.828 11:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:03.828 11:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.828 11:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:03.828 11:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.828 11:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.828 11:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.828 11:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.088 11:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDNhYTgwMzIzZDc0MmNhNzRkOGUwOTMwODI4NTFmYzg2YmQ0NzA5Nzg5ZTk0NDFm9lQsYQ==: --dhchap-ctrl-secret 
DHHC-1:01:ZDkzYzRjODhlMGI1YmM3MjM5NmU0N2RjN2Q1NjcyZDT0j6Fg: 00:19:04.659 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.919 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:04.919 11:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.919 11:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.919 11:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.919 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.919 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:04.919 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:05.180 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:19:05.180 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:05.180 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:05.180 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:05.180 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:05.180 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.180 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:05.180 11:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.180 11:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.180 11:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.180 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:05.180 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:05.180 00:19:05.441 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.441 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.441 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.441 11:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.441 11:30:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.441 11:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.441 11:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.441 11:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.441 11:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.441 { 00:19:05.441 "cntlid": 71, 00:19:05.441 "qid": 0, 00:19:05.441 "state": "enabled", 00:19:05.441 "thread": "nvmf_tgt_poll_group_000", 00:19:05.441 "listen_address": { 00:19:05.441 "trtype": "TCP", 00:19:05.441 "adrfam": "IPv4", 00:19:05.441 "traddr": "10.0.0.2", 00:19:05.441 "trsvcid": "4420" 00:19:05.441 }, 00:19:05.441 "peer_address": { 00:19:05.441 "trtype": "TCP", 00:19:05.441 "adrfam": "IPv4", 00:19:05.441 "traddr": "10.0.0.1", 00:19:05.441 "trsvcid": "33442" 00:19:05.441 }, 00:19:05.441 "auth": { 00:19:05.441 "state": "completed", 00:19:05.441 "digest": "sha384", 00:19:05.441 "dhgroup": "ffdhe3072" 00:19:05.441 } 00:19:05.441 } 00:19:05.441 ]' 00:19:05.441 11:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.441 11:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:05.441 11:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.701 11:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:05.701 11:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.701 11:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.701 11:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.702 11:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.702 11:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzNkMTg1YmI1NTc1N2RmOTgxZDExNDdhZGI1NDhkYzM0MzEzYTExNDg4OGU4YjQ4YzFlY2I1NzZkMjZjZmIxMJ7vT24=: 00:19:06.643 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.643 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:06.643 11:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.643 11:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.643 11:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.643 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:06.643 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.643 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:06.643 11:30:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:06.643 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:19:06.643 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.643 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:06.643 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:06.643 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:06.643 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.643 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.643 11:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.643 11:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.643 11:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.643 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.643 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.904 00:19:06.904 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.904 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.904 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.191 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.191 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.191 11:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.191 11:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.191 11:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.191 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.191 { 00:19:07.191 "cntlid": 73, 00:19:07.191 "qid": 0, 00:19:07.191 "state": "enabled", 00:19:07.191 "thread": "nvmf_tgt_poll_group_000", 00:19:07.191 "listen_address": { 00:19:07.191 "trtype": "TCP", 00:19:07.191 "adrfam": "IPv4", 00:19:07.191 "traddr": "10.0.0.2", 00:19:07.191 "trsvcid": "4420" 00:19:07.191 }, 00:19:07.191 "peer_address": { 00:19:07.191 "trtype": "TCP", 00:19:07.191 "adrfam": "IPv4", 00:19:07.191 "traddr": "10.0.0.1", 00:19:07.191 "trsvcid": "33464" 00:19:07.191 }, 00:19:07.191 "auth": { 00:19:07.191 
"state": "completed", 00:19:07.191 "digest": "sha384", 00:19:07.191 "dhgroup": "ffdhe4096" 00:19:07.191 } 00:19:07.191 } 00:19:07.191 ]' 00:19:07.191 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.191 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:07.191 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.191 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:07.191 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.191 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.191 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.191 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.451 11:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzQ2ZTFiOGI3NGI3MDQyYTEzMDYwNmY0ZTIxMDI1MmM0YWI0NTI2OGQzNjhhZTNh4MSdMw==: --dhchap-ctrl-secret DHHC-1:03:MTQ2M2UzZDUwMGVmMWE3N2NiYjJhZDRmZWM3ZmNmNDBmY2ExNWYzZGJiZjJjMDZkN2E3YjlhYWZiN2ZiMWJmMqSK4cM=: 00:19:08.021 11:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.021 11:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:08.021 11:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.021 11:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.021 11:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.021 11:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.021 11:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:08.021 11:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:08.282 11:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:19:08.282 11:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.282 11:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:08.282 11:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:08.282 11:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:08.282 11:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.282 11:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.282 11:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.282 11:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.282 11:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.282 11:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.282 11:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.542 00:19:08.542 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.542 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.542 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.803 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.803 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.803 11:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.803 11:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.803 11:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.803 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.803 { 00:19:08.803 "cntlid": 75, 00:19:08.803 "qid": 0, 00:19:08.803 "state": "enabled", 00:19:08.803 "thread": "nvmf_tgt_poll_group_000", 00:19:08.803 "listen_address": { 00:19:08.803 "trtype": "TCP", 00:19:08.803 "adrfam": "IPv4", 00:19:08.803 "traddr": "10.0.0.2", 00:19:08.803 "trsvcid": "4420" 00:19:08.803 }, 00:19:08.803 "peer_address": { 00:19:08.803 "trtype": "TCP", 00:19:08.803 "adrfam": "IPv4", 00:19:08.803 "traddr": "10.0.0.1", 00:19:08.803 "trsvcid": "33478" 00:19:08.803 }, 00:19:08.803 "auth": { 00:19:08.803 "state": "completed", 00:19:08.803 "digest": "sha384", 00:19:08.803 "dhgroup": "ffdhe4096" 00:19:08.803 } 00:19:08.803 } 00:19:08.803 ]' 00:19:08.803 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.803 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:08.803 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.803 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:08.803 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.803 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.803 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.803 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.063 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDRlZjc5ZGJlYzY5MWM1MjEzZTBiYTY0OTZlMmU2YTPpkw+v: --dhchap-ctrl-secret DHHC-1:02:ZWQ3NjA0NTdjNDE3MjU2ZTY5MGI2YzI4YTQ3NzMzMjc3NDhjNTM2NDQ0NjEzY2E4JTvMtw==: 00:19:10.006 11:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.006 11:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:10.006 11:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.006 11:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.006 11:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.006 11:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.006 11:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:10.006 11:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:10.006 11:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:19:10.006 11:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.006 11:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:10.006 11:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:10.006 11:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:10.006 11:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.006 11:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.006 11:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.006 11:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.006 11:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.007 11:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.007 11:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:19:10.267 00:19:10.267 11:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.267 11:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.267 11:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.267 11:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.267 11:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.267 11:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.267 11:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.528 11:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.528 11:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.528 { 00:19:10.528 "cntlid": 77, 00:19:10.528 "qid": 0, 00:19:10.528 "state": "enabled", 00:19:10.528 "thread": "nvmf_tgt_poll_group_000", 00:19:10.528 "listen_address": { 00:19:10.528 "trtype": "TCP", 00:19:10.528 "adrfam": "IPv4", 00:19:10.528 "traddr": "10.0.0.2", 00:19:10.528 "trsvcid": "4420" 00:19:10.528 }, 00:19:10.528 "peer_address": { 00:19:10.528 "trtype": "TCP", 00:19:10.528 "adrfam": "IPv4", 00:19:10.528 "traddr": "10.0.0.1", 00:19:10.528 "trsvcid": "40466" 00:19:10.528 }, 00:19:10.528 "auth": { 00:19:10.528 "state": "completed", 00:19:10.528 "digest": "sha384", 00:19:10.528 "dhgroup": "ffdhe4096" 00:19:10.528 } 00:19:10.528 } 00:19:10.528 ]' 00:19:10.528 11:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.528 11:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:10.528 11:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.528 11:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:10.528 11:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.528 11:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.528 11:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.528 11:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.788 11:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDNhYTgwMzIzZDc0MmNhNzRkOGUwOTMwODI4NTFmYzg2YmQ0NzA5Nzg5ZTk0NDFm9lQsYQ==: --dhchap-ctrl-secret DHHC-1:01:ZDkzYzRjODhlMGI1YmM3MjM5NmU0N2RjN2Q1NjcyZDT0j6Fg: 00:19:11.355 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.355 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:11.355 11:30:40 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.355 11:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.355 11:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.355 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.355 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:11.355 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:11.615 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:19:11.615 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.615 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:11.615 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:11.615 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:11.615 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.615 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:11.615 11:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.615 11:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.615 11:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.615 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.615 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.875 00:19:11.875 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.875 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.875 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.135 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.135 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.135 11:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.135 11:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.135 11:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.135 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.135 { 00:19:12.135 "cntlid": 79, 00:19:12.135 "qid": 
0, 00:19:12.135 "state": "enabled", 00:19:12.135 "thread": "nvmf_tgt_poll_group_000", 00:19:12.135 "listen_address": { 00:19:12.135 "trtype": "TCP", 00:19:12.135 "adrfam": "IPv4", 00:19:12.135 "traddr": "10.0.0.2", 00:19:12.135 "trsvcid": "4420" 00:19:12.135 }, 00:19:12.135 "peer_address": { 00:19:12.135 "trtype": "TCP", 00:19:12.135 "adrfam": "IPv4", 00:19:12.135 "traddr": "10.0.0.1", 00:19:12.135 "trsvcid": "40502" 00:19:12.135 }, 00:19:12.135 "auth": { 00:19:12.135 "state": "completed", 00:19:12.135 "digest": "sha384", 00:19:12.135 "dhgroup": "ffdhe4096" 00:19:12.135 } 00:19:12.135 } 00:19:12.135 ]' 00:19:12.135 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.135 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:12.135 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.135 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:12.135 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.135 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.135 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.135 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.394 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzNkMTg1YmI1NTc1N2RmOTgxZDExNDdhZGI1NDhkYzM0MzEzYTExNDg4OGU4YjQ4YzFlY2I1NzZkMjZjZmIxMJ7vT24=: 00:19:13.333 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.333 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:13.333 11:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.333 11:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.333 11:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.333 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:13.333 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.333 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:13.333 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:13.333 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:19:13.333 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.333 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:13.333 11:30:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:13.333 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:13.333 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.333 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.333 11:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.333 11:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.333 11:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.333 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.333 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.593 00:19:13.593 11:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.593 11:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.593 11:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.854 11:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.854 11:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.854 11:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.854 11:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.854 11:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.854 11:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.854 { 00:19:13.854 "cntlid": 81, 00:19:13.854 "qid": 0, 00:19:13.854 "state": "enabled", 00:19:13.854 "thread": "nvmf_tgt_poll_group_000", 00:19:13.854 "listen_address": { 00:19:13.854 "trtype": "TCP", 00:19:13.854 "adrfam": "IPv4", 00:19:13.854 "traddr": "10.0.0.2", 00:19:13.854 "trsvcid": "4420" 00:19:13.854 }, 00:19:13.854 "peer_address": { 00:19:13.854 "trtype": "TCP", 00:19:13.854 "adrfam": "IPv4", 00:19:13.854 "traddr": "10.0.0.1", 00:19:13.854 "trsvcid": "40540" 00:19:13.854 }, 00:19:13.854 "auth": { 00:19:13.854 "state": "completed", 00:19:13.854 "digest": "sha384", 00:19:13.854 "dhgroup": "ffdhe6144" 00:19:13.854 } 00:19:13.854 } 00:19:13.854 ]' 00:19:13.854 11:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.854 11:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:13.854 11:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.854 11:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:13.854 11:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.854 11:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.854 11:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.855 11:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.116 11:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzQ2ZTFiOGI3NGI3MDQyYTEzMDYwNmY0ZTIxMDI1MmM0YWI0NTI2OGQzNjhhZTNh4MSdMw==: --dhchap-ctrl-secret DHHC-1:03:MTQ2M2UzZDUwMGVmMWE3N2NiYjJhZDRmZWM3ZmNmNDBmY2ExNWYzZGJiZjJjMDZkN2E3YjlhYWZiN2ZiMWJmMqSK4cM=: 00:19:15.061 11:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.061 11:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:15.061 11:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.061 11:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.061 11:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.061 11:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.061 11:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:15.061 11:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:15.061 11:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:19:15.061 11:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.061 11:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:15.061 11:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:15.061 11:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:15.061 11:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.061 11:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.061 11:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.061 11:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.062 11:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.062 11:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.062 11:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.321 00:19:15.321 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.321 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.321 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.582 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.582 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.582 11:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.582 11:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.582 11:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.582 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.582 { 00:19:15.582 "cntlid": 83, 00:19:15.582 "qid": 0, 00:19:15.582 "state": "enabled", 00:19:15.582 "thread": "nvmf_tgt_poll_group_000", 00:19:15.582 "listen_address": { 00:19:15.582 "trtype": "TCP", 00:19:15.582 "adrfam": "IPv4", 00:19:15.582 "traddr": "10.0.0.2", 00:19:15.582 "trsvcid": "4420" 00:19:15.582 }, 00:19:15.582 "peer_address": { 00:19:15.582 "trtype": "TCP", 00:19:15.582 "adrfam": "IPv4", 00:19:15.582 "traddr": "10.0.0.1", 00:19:15.582 "trsvcid": "40562" 00:19:15.582 }, 00:19:15.582 "auth": { 00:19:15.582 "state": "completed", 00:19:15.582 "digest": "sha384", 00:19:15.582 "dhgroup": "ffdhe6144" 00:19:15.582 } 00:19:15.582 } 00:19:15.582 ]' 00:19:15.582 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.582 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:15.582 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.582 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:15.582 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.842 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.842 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.842 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.842 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDRlZjc5ZGJlYzY5MWM1MjEzZTBiYTY0OTZlMmU2YTPpkw+v: --dhchap-ctrl-secret 
DHHC-1:02:ZWQ3NjA0NTdjNDE3MjU2ZTY5MGI2YzI4YTQ3NzMzMjc3NDhjNTM2NDQ0NjEzY2E4JTvMtw==: 00:19:16.881 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.881 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:16.881 11:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.881 11:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.881 11:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.881 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.881 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:16.881 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:16.881 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:19:16.881 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.881 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:16.881 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:16.881 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:16.881 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.881 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.881 11:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.881 11:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.881 11:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.881 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.881 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.141 00:19:17.141 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.141 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.142 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.403 11:30:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.403 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.403 11:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.403 11:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.403 11:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.403 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.403 { 00:19:17.403 "cntlid": 85, 00:19:17.403 "qid": 0, 00:19:17.403 "state": "enabled", 00:19:17.403 "thread": "nvmf_tgt_poll_group_000", 00:19:17.403 "listen_address": { 00:19:17.403 "trtype": "TCP", 00:19:17.403 "adrfam": "IPv4", 00:19:17.403 "traddr": "10.0.0.2", 00:19:17.403 "trsvcid": "4420" 00:19:17.403 }, 00:19:17.403 "peer_address": { 00:19:17.403 "trtype": "TCP", 00:19:17.403 "adrfam": "IPv4", 00:19:17.403 "traddr": "10.0.0.1", 00:19:17.403 "trsvcid": "40580" 00:19:17.403 }, 00:19:17.403 "auth": { 00:19:17.403 "state": "completed", 00:19:17.403 "digest": "sha384", 00:19:17.403 "dhgroup": "ffdhe6144" 00:19:17.403 } 00:19:17.403 } 00:19:17.403 ]' 00:19:17.403 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.403 11:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:17.403 11:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.403 11:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:17.403 11:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.403 11:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.403 11:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.403 11:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.665 11:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDNhYTgwMzIzZDc0MmNhNzRkOGUwOTMwODI4NTFmYzg2YmQ0NzA5Nzg5ZTk0NDFm9lQsYQ==: --dhchap-ctrl-secret DHHC-1:01:ZDkzYzRjODhlMGI1YmM3MjM5NmU0N2RjN2Q1NjcyZDT0j6Fg: 00:19:18.609 11:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.609 11:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:18.609 11:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.610 11:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.610 11:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.610 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.610 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
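For reference, the round the trace keeps repeating for every digest/dhgroup/key combination condenses to the sequence below. This is only a sketch built from the commands already visible in this run (same NQNs, address and host RPC socket); rpc.py paths are abbreviated to scripts/rpc.py, key2/ckey2 are key names loaded earlier in the test, and for key3 the --dhchap-ctrlr-key arguments are simply dropped, exactly as in the trace above.

# host side talks to the bdev_nvme layer over /var/tmp/host.sock,
# target side uses plain rpc_cmd (shown here as a bare scripts/rpc.py call)
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
# verify the qpair negotiated what was configured
qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
jq -r '.[0].auth.digest'  <<< "$qpairs"   # expect sha384
jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expect ffdhe6144
jq -r '.[0].auth.state'   <<< "$qpairs"   # expect completed
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0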
00:19:18.610 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:18.610 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:19:18.610 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.610 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:18.610 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:18.610 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:18.610 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.610 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:18.610 11:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.610 11:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.610 11:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.610 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:18.610 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:18.870 00:19:18.870 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.870 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.870 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.131 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.131 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.131 11:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.131 11:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.131 11:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.131 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.131 { 00:19:19.131 "cntlid": 87, 00:19:19.131 "qid": 0, 00:19:19.131 "state": "enabled", 00:19:19.131 "thread": "nvmf_tgt_poll_group_000", 00:19:19.131 "listen_address": { 00:19:19.131 "trtype": "TCP", 00:19:19.131 "adrfam": "IPv4", 00:19:19.131 "traddr": "10.0.0.2", 00:19:19.131 "trsvcid": "4420" 00:19:19.131 }, 00:19:19.131 "peer_address": { 00:19:19.131 "trtype": "TCP", 00:19:19.131 "adrfam": "IPv4", 00:19:19.131 "traddr": "10.0.0.1", 00:19:19.131 "trsvcid": "40606" 00:19:19.131 }, 00:19:19.131 "auth": { 00:19:19.131 "state": "completed", 
00:19:19.131 "digest": "sha384", 00:19:19.131 "dhgroup": "ffdhe6144" 00:19:19.131 } 00:19:19.131 } 00:19:19.131 ]' 00:19:19.131 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.131 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:19.131 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.131 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:19.131 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.131 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.131 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.131 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.392 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzNkMTg1YmI1NTc1N2RmOTgxZDExNDdhZGI1NDhkYzM0MzEzYTExNDg4OGU4YjQ4YzFlY2I1NzZkMjZjZmIxMJ7vT24=: 00:19:20.335 11:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.335 11:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:20.335 11:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.335 11:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.335 11:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.335 11:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:20.335 11:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.335 11:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:20.335 11:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:20.335 11:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:19:20.335 11:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.335 11:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:20.335 11:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:20.335 11:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:20.335 11:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.335 11:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:20.335 11:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.335 11:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.335 11:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.335 11:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.335 11:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.907 00:19:20.907 11:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.907 11:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.907 11:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.907 11:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.907 11:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.907 11:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.907 11:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.907 11:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.907 11:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.907 { 00:19:20.907 "cntlid": 89, 00:19:20.907 "qid": 0, 00:19:20.907 "state": "enabled", 00:19:20.907 "thread": "nvmf_tgt_poll_group_000", 00:19:20.907 "listen_address": { 00:19:20.907 "trtype": "TCP", 00:19:20.907 "adrfam": "IPv4", 00:19:20.907 "traddr": "10.0.0.2", 00:19:20.907 "trsvcid": "4420" 00:19:20.907 }, 00:19:20.907 "peer_address": { 00:19:20.907 "trtype": "TCP", 00:19:20.907 "adrfam": "IPv4", 00:19:20.907 "traddr": "10.0.0.1", 00:19:20.907 "trsvcid": "39066" 00:19:20.907 }, 00:19:20.907 "auth": { 00:19:20.907 "state": "completed", 00:19:20.907 "digest": "sha384", 00:19:20.907 "dhgroup": "ffdhe8192" 00:19:20.907 } 00:19:20.907 } 00:19:20.907 ]' 00:19:20.907 11:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.168 11:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:21.168 11:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.168 11:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:21.168 11:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.168 11:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.168 11:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.168 11:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.428 11:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzQ2ZTFiOGI3NGI3MDQyYTEzMDYwNmY0ZTIxMDI1MmM0YWI0NTI2OGQzNjhhZTNh4MSdMw==: --dhchap-ctrl-secret DHHC-1:03:MTQ2M2UzZDUwMGVmMWE3N2NiYjJhZDRmZWM3ZmNmNDBmY2ExNWYzZGJiZjJjMDZkN2E3YjlhYWZiN2ZiMWJmMqSK4cM=: 00:19:21.999 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.999 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:21.999 11:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.999 11:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.000 11:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.000 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.000 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:22.000 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:22.260 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:19:22.260 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.260 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:22.260 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:22.260 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:22.260 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.260 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.260 11:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.260 11:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.260 11:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.260 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.260 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
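For reference, the per-key sequence that the trace repeats for every digest/dhgroup combination (here sha384/ffdhe8192 with key index 1) condenses to the sketch below. The socket path, NQNs, key names and flags are copied from the trace; the rpc_cmd/hostrpc wrappers are spelled out as plain rpc.py calls (the target-side socket is not expanded in the trace), and key1/ckey1 are assumed to be DH-HMAC-CHAP key objects registered earlier in the run.

  # condensed sketch of one connect_authenticate iteration (not the literal target/auth.sh source)
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  # host side (hostrpc): pin the initiator to a single digest/dhgroup pair
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
  # target side (rpc_cmd): allow the host on the subsystem with the key pair under test
  $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host side: attach a controller so the connect is forced through DH-HMAC-CHAP
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n $SUBNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1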
00:19:22.831 00:19:22.831 11:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.831 11:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.831 11:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.831 11:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.831 11:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.831 11:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.831 11:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.831 11:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.092 11:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.092 { 00:19:23.092 "cntlid": 91, 00:19:23.092 "qid": 0, 00:19:23.092 "state": "enabled", 00:19:23.092 "thread": "nvmf_tgt_poll_group_000", 00:19:23.092 "listen_address": { 00:19:23.092 "trtype": "TCP", 00:19:23.092 "adrfam": "IPv4", 00:19:23.092 "traddr": "10.0.0.2", 00:19:23.092 "trsvcid": "4420" 00:19:23.092 }, 00:19:23.092 "peer_address": { 00:19:23.092 "trtype": "TCP", 00:19:23.092 "adrfam": "IPv4", 00:19:23.092 "traddr": "10.0.0.1", 00:19:23.092 "trsvcid": "39076" 00:19:23.092 }, 00:19:23.092 "auth": { 00:19:23.092 "state": "completed", 00:19:23.092 "digest": "sha384", 00:19:23.092 "dhgroup": "ffdhe8192" 00:19:23.092 } 00:19:23.092 } 00:19:23.092 ]' 00:19:23.092 11:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.092 11:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:23.092 11:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.092 11:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:23.092 11:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.092 11:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.092 11:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.092 11:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.353 11:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDRlZjc5ZGJlYzY5MWM1MjEzZTBiYTY0OTZlMmU2YTPpkw+v: --dhchap-ctrl-secret DHHC-1:02:ZWQ3NjA0NTdjNDE3MjU2ZTY5MGI2YzI4YTQ3NzMzMjc3NDhjNTM2NDQ0NjEzY2E4JTvMtw==: 00:19:23.924 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.924 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:23.924 11:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:23.924 11:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.924 11:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.924 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.924 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:23.924 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:24.185 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:19:24.185 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.185 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:24.185 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:24.185 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:24.185 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.185 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.185 11:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.185 11:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.185 11:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.185 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.185 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.755 00:19:24.755 11:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.755 11:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.755 11:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.016 11:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.016 11:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.016 11:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.016 11:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.016 11:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.016 11:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.016 { 
00:19:25.016 "cntlid": 93, 00:19:25.016 "qid": 0, 00:19:25.016 "state": "enabled", 00:19:25.016 "thread": "nvmf_tgt_poll_group_000", 00:19:25.016 "listen_address": { 00:19:25.016 "trtype": "TCP", 00:19:25.016 "adrfam": "IPv4", 00:19:25.016 "traddr": "10.0.0.2", 00:19:25.016 "trsvcid": "4420" 00:19:25.016 }, 00:19:25.016 "peer_address": { 00:19:25.016 "trtype": "TCP", 00:19:25.016 "adrfam": "IPv4", 00:19:25.016 "traddr": "10.0.0.1", 00:19:25.016 "trsvcid": "39102" 00:19:25.016 }, 00:19:25.016 "auth": { 00:19:25.016 "state": "completed", 00:19:25.016 "digest": "sha384", 00:19:25.016 "dhgroup": "ffdhe8192" 00:19:25.016 } 00:19:25.016 } 00:19:25.016 ]' 00:19:25.016 11:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.016 11:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:25.016 11:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.016 11:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:25.016 11:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.016 11:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.016 11:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.016 11:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.276 11:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDNhYTgwMzIzZDc0MmNhNzRkOGUwOTMwODI4NTFmYzg2YmQ0NzA5Nzg5ZTk0NDFm9lQsYQ==: --dhchap-ctrl-secret DHHC-1:01:ZDkzYzRjODhlMGI1YmM3MjM5NmU0N2RjN2Q1NjcyZDT0j6Fg: 00:19:25.846 11:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.107 11:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:26.107 11:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.107 11:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.107 11:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.107 11:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.107 11:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:26.107 11:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:26.107 11:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:26.107 11:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.107 11:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:26.107 11:30:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:26.107 11:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:26.107 11:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.107 11:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:26.107 11:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.107 11:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.107 11:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.107 11:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:26.107 11:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:26.679 00:19:26.679 11:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.679 11:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.679 11:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.940 11:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.940 11:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.940 11:30:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.940 11:30:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.940 11:30:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.940 11:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.940 { 00:19:26.940 "cntlid": 95, 00:19:26.940 "qid": 0, 00:19:26.940 "state": "enabled", 00:19:26.940 "thread": "nvmf_tgt_poll_group_000", 00:19:26.940 "listen_address": { 00:19:26.940 "trtype": "TCP", 00:19:26.940 "adrfam": "IPv4", 00:19:26.940 "traddr": "10.0.0.2", 00:19:26.940 "trsvcid": "4420" 00:19:26.940 }, 00:19:26.940 "peer_address": { 00:19:26.940 "trtype": "TCP", 00:19:26.940 "adrfam": "IPv4", 00:19:26.940 "traddr": "10.0.0.1", 00:19:26.940 "trsvcid": "39128" 00:19:26.940 }, 00:19:26.940 "auth": { 00:19:26.940 "state": "completed", 00:19:26.940 "digest": "sha384", 00:19:26.940 "dhgroup": "ffdhe8192" 00:19:26.940 } 00:19:26.940 } 00:19:26.940 ]' 00:19:26.940 11:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.940 11:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:26.940 11:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.940 11:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:26.940 11:30:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.940 11:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.940 11:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.940 11:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.201 11:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzNkMTg1YmI1NTc1N2RmOTgxZDExNDdhZGI1NDhkYzM0MzEzYTExNDg4OGU4YjQ4YzFlY2I1NzZkMjZjZmIxMJ7vT24=: 00:19:27.771 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.771 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:27.771 11:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.771 11:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.771 11:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.771 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:27.771 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:27.771 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.771 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:27.771 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:28.032 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:28.032 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.032 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:28.032 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:28.032 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:28.032 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.032 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.032 11:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.032 11:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.032 11:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.032 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.032 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.293 00:19:28.293 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.293 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.293 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.554 11:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.554 11:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.554 11:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.555 11:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.555 11:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.555 11:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.555 { 00:19:28.555 "cntlid": 97, 00:19:28.555 "qid": 0, 00:19:28.555 "state": "enabled", 00:19:28.555 "thread": "nvmf_tgt_poll_group_000", 00:19:28.555 "listen_address": { 00:19:28.555 "trtype": "TCP", 00:19:28.555 "adrfam": "IPv4", 00:19:28.555 "traddr": "10.0.0.2", 00:19:28.555 "trsvcid": "4420" 00:19:28.555 }, 00:19:28.555 "peer_address": { 00:19:28.555 "trtype": "TCP", 00:19:28.555 "adrfam": "IPv4", 00:19:28.555 "traddr": "10.0.0.1", 00:19:28.555 "trsvcid": "39164" 00:19:28.555 }, 00:19:28.555 "auth": { 00:19:28.555 "state": "completed", 00:19:28.555 "digest": "sha512", 00:19:28.555 "dhgroup": "null" 00:19:28.555 } 00:19:28.555 } 00:19:28.555 ]' 00:19:28.555 11:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.555 11:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.555 11:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.555 11:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:28.555 11:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.555 11:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.555 11:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.555 11:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.815 11:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzQ2ZTFiOGI3NGI3MDQyYTEzMDYwNmY0ZTIxMDI1MmM0YWI0NTI2OGQzNjhhZTNh4MSdMw==: --dhchap-ctrl-secret 
DHHC-1:03:MTQ2M2UzZDUwMGVmMWE3N2NiYjJhZDRmZWM3ZmNmNDBmY2ExNWYzZGJiZjJjMDZkN2E3YjlhYWZiN2ZiMWJmMqSK4cM=: 00:19:29.387 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.387 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:29.387 11:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.387 11:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.647 11:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.647 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.647 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:29.647 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:29.648 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:29.648 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.648 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:29.648 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:29.648 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:29.648 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.648 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.648 11:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.648 11:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.648 11:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.648 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.648 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.908 00:19:29.908 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.908 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.908 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.169 11:30:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.169 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.169 11:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.169 11:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.169 11:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.169 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.169 { 00:19:30.169 "cntlid": 99, 00:19:30.169 "qid": 0, 00:19:30.169 "state": "enabled", 00:19:30.169 "thread": "nvmf_tgt_poll_group_000", 00:19:30.169 "listen_address": { 00:19:30.169 "trtype": "TCP", 00:19:30.169 "adrfam": "IPv4", 00:19:30.169 "traddr": "10.0.0.2", 00:19:30.169 "trsvcid": "4420" 00:19:30.169 }, 00:19:30.169 "peer_address": { 00:19:30.169 "trtype": "TCP", 00:19:30.169 "adrfam": "IPv4", 00:19:30.169 "traddr": "10.0.0.1", 00:19:30.169 "trsvcid": "43720" 00:19:30.169 }, 00:19:30.169 "auth": { 00:19:30.169 "state": "completed", 00:19:30.169 "digest": "sha512", 00:19:30.169 "dhgroup": "null" 00:19:30.169 } 00:19:30.169 } 00:19:30.169 ]' 00:19:30.169 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.169 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:30.169 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.169 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:30.169 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.169 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.169 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.169 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.430 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDRlZjc5ZGJlYzY5MWM1MjEzZTBiYTY0OTZlMmU2YTPpkw+v: --dhchap-ctrl-secret DHHC-1:02:ZWQ3NjA0NTdjNDE3MjU2ZTY5MGI2YzI4YTQ3NzMzMjc3NDhjNTM2NDQ0NjEzY2E4JTvMtw==: 00:19:31.372 11:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.372 11:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:31.372 11:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.372 11:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.372 11:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.372 11:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.372 11:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:31.372 11:30:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:31.372 11:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:31.372 11:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.372 11:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:31.372 11:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:31.372 11:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:31.372 11:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.372 11:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.372 11:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.372 11:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.372 11:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.372 11:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.372 11:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.671 00:19:31.671 11:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.671 11:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.671 11:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.671 11:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.671 11:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.671 11:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.671 11:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.671 11:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.671 11:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.671 { 00:19:31.671 "cntlid": 101, 00:19:31.671 "qid": 0, 00:19:31.671 "state": "enabled", 00:19:31.671 "thread": "nvmf_tgt_poll_group_000", 00:19:31.671 "listen_address": { 00:19:31.671 "trtype": "TCP", 00:19:31.671 "adrfam": "IPv4", 00:19:31.671 "traddr": "10.0.0.2", 00:19:31.671 "trsvcid": "4420" 00:19:31.671 }, 00:19:31.671 "peer_address": { 00:19:31.671 "trtype": "TCP", 00:19:31.671 "adrfam": "IPv4", 00:19:31.671 "traddr": "10.0.0.1", 00:19:31.671 "trsvcid": "43758" 00:19:31.671 }, 00:19:31.671 "auth": 
{ 00:19:31.671 "state": "completed", 00:19:31.671 "digest": "sha512", 00:19:31.671 "dhgroup": "null" 00:19:31.671 } 00:19:31.671 } 00:19:31.671 ]' 00:19:31.671 11:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.932 11:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:31.932 11:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.932 11:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:31.932 11:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.932 11:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.932 11:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.932 11:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.193 11:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDNhYTgwMzIzZDc0MmNhNzRkOGUwOTMwODI4NTFmYzg2YmQ0NzA5Nzg5ZTk0NDFm9lQsYQ==: --dhchap-ctrl-secret DHHC-1:01:ZDkzYzRjODhlMGI1YmM3MjM5NmU0N2RjN2Q1NjcyZDT0j6Fg: 00:19:32.764 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.764 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:32.764 11:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.764 11:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.764 11:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.764 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.764 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:32.764 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:33.024 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:33.025 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.025 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:33.025 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:33.025 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:33.025 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.025 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:33.025 11:31:01 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.025 11:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.025 11:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.025 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.025 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.286 00:19:33.286 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.286 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.286 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.286 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.286 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.286 11:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.286 11:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.286 11:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.286 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.286 { 00:19:33.286 "cntlid": 103, 00:19:33.286 "qid": 0, 00:19:33.286 "state": "enabled", 00:19:33.286 "thread": "nvmf_tgt_poll_group_000", 00:19:33.286 "listen_address": { 00:19:33.286 "trtype": "TCP", 00:19:33.286 "adrfam": "IPv4", 00:19:33.286 "traddr": "10.0.0.2", 00:19:33.286 "trsvcid": "4420" 00:19:33.286 }, 00:19:33.286 "peer_address": { 00:19:33.286 "trtype": "TCP", 00:19:33.286 "adrfam": "IPv4", 00:19:33.286 "traddr": "10.0.0.1", 00:19:33.286 "trsvcid": "43782" 00:19:33.286 }, 00:19:33.286 "auth": { 00:19:33.286 "state": "completed", 00:19:33.286 "digest": "sha512", 00:19:33.286 "dhgroup": "null" 00:19:33.286 } 00:19:33.286 } 00:19:33.286 ]' 00:19:33.286 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.547 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.547 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.547 11:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:33.547 11:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.547 11:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.547 11:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.547 11:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.547 11:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzNkMTg1YmI1NTc1N2RmOTgxZDExNDdhZGI1NDhkYzM0MzEzYTExNDg4OGU4YjQ4YzFlY2I1NzZkMjZjZmIxMJ7vT24=: 00:19:34.489 11:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.489 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:34.489 11:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.489 11:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.489 11:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.489 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.489 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.489 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:34.489 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:34.748 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:34.748 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.748 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:34.748 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:34.748 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:34.748 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.748 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.748 11:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.748 11:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.748 11:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.748 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.748 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.009 00:19:35.009 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.009 11:31:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.009 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.009 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.009 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.009 11:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.009 11:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.009 11:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.009 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.009 { 00:19:35.009 "cntlid": 105, 00:19:35.009 "qid": 0, 00:19:35.009 "state": "enabled", 00:19:35.009 "thread": "nvmf_tgt_poll_group_000", 00:19:35.009 "listen_address": { 00:19:35.009 "trtype": "TCP", 00:19:35.009 "adrfam": "IPv4", 00:19:35.009 "traddr": "10.0.0.2", 00:19:35.009 "trsvcid": "4420" 00:19:35.009 }, 00:19:35.009 "peer_address": { 00:19:35.009 "trtype": "TCP", 00:19:35.009 "adrfam": "IPv4", 00:19:35.009 "traddr": "10.0.0.1", 00:19:35.009 "trsvcid": "43816" 00:19:35.009 }, 00:19:35.009 "auth": { 00:19:35.009 "state": "completed", 00:19:35.009 "digest": "sha512", 00:19:35.009 "dhgroup": "ffdhe2048" 00:19:35.009 } 00:19:35.009 } 00:19:35.009 ]' 00:19:35.009 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.269 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:35.269 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.269 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:35.269 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.269 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.269 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.269 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.528 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzQ2ZTFiOGI3NGI3MDQyYTEzMDYwNmY0ZTIxMDI1MmM0YWI0NTI2OGQzNjhhZTNh4MSdMw==: --dhchap-ctrl-secret DHHC-1:03:MTQ2M2UzZDUwMGVmMWE3N2NiYjJhZDRmZWM3ZmNmNDBmY2ExNWYzZGJiZjJjMDZkN2E3YjlhYWZiN2ZiMWJmMqSK4cM=: 00:19:36.145 11:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.145 11:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:36.145 11:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.145 11:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
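The second half of each iteration, seen just above for sha512/ffdhe2048 with key index 0, verifies the negotiated parameters on the target and then exercises the same handshake through the kernel initiator. A condensed sketch follows; the three jq checks from the trace are folded into one filter, and <secret>/<ctrl-secret> stand in for the literal DHHC-1 strings printed in the trace.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # confirm the host controller exists, then inspect the accepted qpair on the target
  $RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  $RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'          # expect: sha512 / ffdhe2048 / completed
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # repeat the handshake with nvme-cli, passing the raw secrets instead of key names
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-secret '<secret>' --dhchap-ctrl-secret '<ctrl-secret>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # finally drop the host from the subsystem before the next key/dhgroup combination
  $RPC nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be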
00:19:36.145 11:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.145 11:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.145 11:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:36.145 11:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:36.404 11:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:36.404 11:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.404 11:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:36.404 11:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:36.404 11:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:36.404 11:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.405 11:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.405 11:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.405 11:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.405 11:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.405 11:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.405 11:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.665 00:19:36.665 11:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.665 11:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.665 11:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.665 11:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.665 11:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.665 11:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.665 11:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.666 11:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.666 11:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.666 { 00:19:36.666 "cntlid": 107, 00:19:36.666 "qid": 0, 00:19:36.666 "state": "enabled", 00:19:36.666 "thread": 
"nvmf_tgt_poll_group_000", 00:19:36.666 "listen_address": { 00:19:36.666 "trtype": "TCP", 00:19:36.666 "adrfam": "IPv4", 00:19:36.666 "traddr": "10.0.0.2", 00:19:36.666 "trsvcid": "4420" 00:19:36.666 }, 00:19:36.666 "peer_address": { 00:19:36.666 "trtype": "TCP", 00:19:36.666 "adrfam": "IPv4", 00:19:36.666 "traddr": "10.0.0.1", 00:19:36.666 "trsvcid": "43850" 00:19:36.666 }, 00:19:36.666 "auth": { 00:19:36.666 "state": "completed", 00:19:36.666 "digest": "sha512", 00:19:36.666 "dhgroup": "ffdhe2048" 00:19:36.666 } 00:19:36.666 } 00:19:36.666 ]' 00:19:36.666 11:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.926 11:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.926 11:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.926 11:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:36.926 11:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.926 11:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.926 11:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.926 11:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.926 11:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDRlZjc5ZGJlYzY5MWM1MjEzZTBiYTY0OTZlMmU2YTPpkw+v: --dhchap-ctrl-secret DHHC-1:02:ZWQ3NjA0NTdjNDE3MjU2ZTY5MGI2YzI4YTQ3NzMzMjc3NDhjNTM2NDQ0NjEzY2E4JTvMtw==: 00:19:37.866 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.866 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:37.866 11:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.866 11:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.866 11:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.866 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.866 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:37.866 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:37.866 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:37.866 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.866 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:37.866 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:37.866 11:31:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:37.866 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.866 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.866 11:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.866 11:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.867 11:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.867 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.867 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.127 00:19:38.127 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.127 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.127 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.388 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.388 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.388 11:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.388 11:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.388 11:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.388 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.388 { 00:19:38.388 "cntlid": 109, 00:19:38.388 "qid": 0, 00:19:38.388 "state": "enabled", 00:19:38.388 "thread": "nvmf_tgt_poll_group_000", 00:19:38.388 "listen_address": { 00:19:38.388 "trtype": "TCP", 00:19:38.388 "adrfam": "IPv4", 00:19:38.388 "traddr": "10.0.0.2", 00:19:38.388 "trsvcid": "4420" 00:19:38.388 }, 00:19:38.388 "peer_address": { 00:19:38.388 "trtype": "TCP", 00:19:38.388 "adrfam": "IPv4", 00:19:38.388 "traddr": "10.0.0.1", 00:19:38.388 "trsvcid": "43860" 00:19:38.388 }, 00:19:38.388 "auth": { 00:19:38.388 "state": "completed", 00:19:38.388 "digest": "sha512", 00:19:38.388 "dhgroup": "ffdhe2048" 00:19:38.388 } 00:19:38.388 } 00:19:38.388 ]' 00:19:38.388 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.388 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:38.388 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.388 11:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:38.388 11:31:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.388 11:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.388 11:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.388 11:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.649 11:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDNhYTgwMzIzZDc0MmNhNzRkOGUwOTMwODI4NTFmYzg2YmQ0NzA5Nzg5ZTk0NDFm9lQsYQ==: --dhchap-ctrl-secret DHHC-1:01:ZDkzYzRjODhlMGI1YmM3MjM5NmU0N2RjN2Q1NjcyZDT0j6Fg: 00:19:39.592 11:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.592 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:39.592 11:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.592 11:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.592 11:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.592 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.592 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:39.592 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:39.592 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:39.592 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.592 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:39.592 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:39.592 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:39.592 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.592 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:39.592 11:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.592 11:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.592 11:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.592 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:39.592 11:31:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:39.853 00:19:39.853 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.853 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.853 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.114 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.114 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.114 11:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.114 11:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.114 11:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.114 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.114 { 00:19:40.114 "cntlid": 111, 00:19:40.114 "qid": 0, 00:19:40.114 "state": "enabled", 00:19:40.114 "thread": "nvmf_tgt_poll_group_000", 00:19:40.114 "listen_address": { 00:19:40.114 "trtype": "TCP", 00:19:40.114 "adrfam": "IPv4", 00:19:40.114 "traddr": "10.0.0.2", 00:19:40.114 "trsvcid": "4420" 00:19:40.114 }, 00:19:40.114 "peer_address": { 00:19:40.114 "trtype": "TCP", 00:19:40.114 "adrfam": "IPv4", 00:19:40.114 "traddr": "10.0.0.1", 00:19:40.114 "trsvcid": "53622" 00:19:40.114 }, 00:19:40.114 "auth": { 00:19:40.114 "state": "completed", 00:19:40.114 "digest": "sha512", 00:19:40.114 "dhgroup": "ffdhe2048" 00:19:40.114 } 00:19:40.114 } 00:19:40.114 ]' 00:19:40.114 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.114 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:40.114 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.114 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:40.114 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.114 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.114 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.114 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.375 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzNkMTg1YmI1NTc1N2RmOTgxZDExNDdhZGI1NDhkYzM0MzEzYTExNDg4OGU4YjQ4YzFlY2I1NzZkMjZjZmIxMJ7vT24=: 00:19:40.973 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.973 11:31:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:40.973 11:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.973 11:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.973 11:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.973 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:40.973 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.973 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:40.973 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:41.234 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:41.235 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.235 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:41.235 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:41.235 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:41.235 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.235 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.235 11:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.235 11:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.235 11:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.235 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.235 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.495 00:19:41.495 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.495 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.495 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.495 11:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.495 11:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.495 11:31:10 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.495 11:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.495 11:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.495 11:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.495 { 00:19:41.495 "cntlid": 113, 00:19:41.495 "qid": 0, 00:19:41.495 "state": "enabled", 00:19:41.495 "thread": "nvmf_tgt_poll_group_000", 00:19:41.495 "listen_address": { 00:19:41.495 "trtype": "TCP", 00:19:41.495 "adrfam": "IPv4", 00:19:41.495 "traddr": "10.0.0.2", 00:19:41.495 "trsvcid": "4420" 00:19:41.495 }, 00:19:41.495 "peer_address": { 00:19:41.495 "trtype": "TCP", 00:19:41.495 "adrfam": "IPv4", 00:19:41.495 "traddr": "10.0.0.1", 00:19:41.495 "trsvcid": "53648" 00:19:41.495 }, 00:19:41.495 "auth": { 00:19:41.495 "state": "completed", 00:19:41.495 "digest": "sha512", 00:19:41.495 "dhgroup": "ffdhe3072" 00:19:41.495 } 00:19:41.495 } 00:19:41.495 ]' 00:19:41.495 11:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.755 11:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:41.755 11:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.755 11:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:41.755 11:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.755 11:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.755 11:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.755 11:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.755 11:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzQ2ZTFiOGI3NGI3MDQyYTEzMDYwNmY0ZTIxMDI1MmM0YWI0NTI2OGQzNjhhZTNh4MSdMw==: --dhchap-ctrl-secret DHHC-1:03:MTQ2M2UzZDUwMGVmMWE3N2NiYjJhZDRmZWM3ZmNmNDBmY2ExNWYzZGJiZjJjMDZkN2E3YjlhYWZiN2ZiMWJmMqSK4cM=: 00:19:42.705 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.705 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:42.705 11:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.705 11:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.705 11:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.705 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.705 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:42.705 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:42.705 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:42.705 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.705 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:42.705 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:42.705 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:42.705 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.705 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.705 11:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.705 11:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.705 11:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.705 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.705 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.965 00:19:42.965 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.965 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.965 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.226 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.226 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.226 11:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.226 11:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.226 11:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.226 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.226 { 00:19:43.226 "cntlid": 115, 00:19:43.226 "qid": 0, 00:19:43.226 "state": "enabled", 00:19:43.226 "thread": "nvmf_tgt_poll_group_000", 00:19:43.226 "listen_address": { 00:19:43.226 "trtype": "TCP", 00:19:43.226 "adrfam": "IPv4", 00:19:43.226 "traddr": "10.0.0.2", 00:19:43.226 "trsvcid": "4420" 00:19:43.226 }, 00:19:43.226 "peer_address": { 00:19:43.226 "trtype": "TCP", 00:19:43.226 "adrfam": "IPv4", 00:19:43.226 "traddr": "10.0.0.1", 00:19:43.226 "trsvcid": "53684" 00:19:43.226 }, 00:19:43.226 "auth": { 00:19:43.226 "state": "completed", 00:19:43.226 "digest": "sha512", 00:19:43.226 "dhgroup": "ffdhe3072" 00:19:43.226 } 00:19:43.226 } 
00:19:43.226 ]' 00:19:43.226 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.226 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:43.226 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.226 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:43.226 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.226 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.226 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.226 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.486 11:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDRlZjc5ZGJlYzY5MWM1MjEzZTBiYTY0OTZlMmU2YTPpkw+v: --dhchap-ctrl-secret DHHC-1:02:ZWQ3NjA0NTdjNDE3MjU2ZTY5MGI2YzI4YTQ3NzMzMjc3NDhjNTM2NDQ0NjEzY2E4JTvMtw==: 00:19:44.428 11:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.428 11:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:44.428 11:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.428 11:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.428 11:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.428 11:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.428 11:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:44.428 11:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:44.428 11:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:44.428 11:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.428 11:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:44.428 11:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:44.428 11:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:44.428 11:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.428 11:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.428 11:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.428 11:31:12 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.428 11:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.428 11:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.428 11:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.688 00:19:44.688 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.688 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.688 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.688 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.688 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.688 11:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.688 11:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.948 11:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.948 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.948 { 00:19:44.948 "cntlid": 117, 00:19:44.948 "qid": 0, 00:19:44.948 "state": "enabled", 00:19:44.948 "thread": "nvmf_tgt_poll_group_000", 00:19:44.948 "listen_address": { 00:19:44.948 "trtype": "TCP", 00:19:44.948 "adrfam": "IPv4", 00:19:44.948 "traddr": "10.0.0.2", 00:19:44.948 "trsvcid": "4420" 00:19:44.948 }, 00:19:44.948 "peer_address": { 00:19:44.948 "trtype": "TCP", 00:19:44.948 "adrfam": "IPv4", 00:19:44.948 "traddr": "10.0.0.1", 00:19:44.948 "trsvcid": "53720" 00:19:44.948 }, 00:19:44.948 "auth": { 00:19:44.948 "state": "completed", 00:19:44.948 "digest": "sha512", 00:19:44.948 "dhgroup": "ffdhe3072" 00:19:44.948 } 00:19:44.948 } 00:19:44.948 ]' 00:19:44.948 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.948 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:44.948 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.948 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:44.948 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.948 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.948 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.948 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.209 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDNhYTgwMzIzZDc0MmNhNzRkOGUwOTMwODI4NTFmYzg2YmQ0NzA5Nzg5ZTk0NDFm9lQsYQ==: --dhchap-ctrl-secret DHHC-1:01:ZDkzYzRjODhlMGI1YmM3MjM5NmU0N2RjN2Q1NjcyZDT0j6Fg: 00:19:45.781 11:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.781 11:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:45.781 11:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.781 11:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.781 11:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.781 11:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.781 11:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:45.781 11:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:46.041 11:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:46.041 11:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.041 11:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:46.041 11:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:46.041 11:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:46.041 11:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.041 11:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:46.041 11:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.041 11:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.041 11:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.041 11:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.041 11:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.304 00:19:46.304 11:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.304 11:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.304 11:31:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.624 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.624 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.624 11:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.624 11:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.624 11:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.624 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.624 { 00:19:46.624 "cntlid": 119, 00:19:46.624 "qid": 0, 00:19:46.624 "state": "enabled", 00:19:46.624 "thread": "nvmf_tgt_poll_group_000", 00:19:46.624 "listen_address": { 00:19:46.624 "trtype": "TCP", 00:19:46.624 "adrfam": "IPv4", 00:19:46.624 "traddr": "10.0.0.2", 00:19:46.624 "trsvcid": "4420" 00:19:46.624 }, 00:19:46.624 "peer_address": { 00:19:46.624 "trtype": "TCP", 00:19:46.624 "adrfam": "IPv4", 00:19:46.624 "traddr": "10.0.0.1", 00:19:46.624 "trsvcid": "53746" 00:19:46.624 }, 00:19:46.624 "auth": { 00:19:46.624 "state": "completed", 00:19:46.624 "digest": "sha512", 00:19:46.624 "dhgroup": "ffdhe3072" 00:19:46.624 } 00:19:46.624 } 00:19:46.624 ]' 00:19:46.624 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.624 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:46.624 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.624 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:46.624 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.624 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.624 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.624 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.884 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzNkMTg1YmI1NTc1N2RmOTgxZDExNDdhZGI1NDhkYzM0MzEzYTExNDg4OGU4YjQ4YzFlY2I1NzZkMjZjZmIxMJ7vT24=: 00:19:47.454 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.454 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:47.454 11:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.454 11:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.454 11:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.454 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:47.454 11:31:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.454 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:47.455 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:47.715 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:47.716 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.716 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:47.716 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:47.716 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:47.716 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.716 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.716 11:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.716 11:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.716 11:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.716 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.716 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.976 00:19:47.976 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.976 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.976 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.237 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.237 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.237 11:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.237 11:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.237 11:31:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.237 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.237 { 00:19:48.237 "cntlid": 121, 00:19:48.237 "qid": 0, 00:19:48.237 "state": "enabled", 00:19:48.237 "thread": "nvmf_tgt_poll_group_000", 00:19:48.237 "listen_address": { 00:19:48.237 "trtype": "TCP", 00:19:48.237 "adrfam": "IPv4", 
00:19:48.237 "traddr": "10.0.0.2", 00:19:48.237 "trsvcid": "4420" 00:19:48.237 }, 00:19:48.237 "peer_address": { 00:19:48.237 "trtype": "TCP", 00:19:48.237 "adrfam": "IPv4", 00:19:48.237 "traddr": "10.0.0.1", 00:19:48.237 "trsvcid": "53780" 00:19:48.237 }, 00:19:48.237 "auth": { 00:19:48.237 "state": "completed", 00:19:48.237 "digest": "sha512", 00:19:48.237 "dhgroup": "ffdhe4096" 00:19:48.237 } 00:19:48.237 } 00:19:48.237 ]' 00:19:48.237 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.237 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:48.237 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.237 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:48.237 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.237 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.237 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.237 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.497 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzQ2ZTFiOGI3NGI3MDQyYTEzMDYwNmY0ZTIxMDI1MmM0YWI0NTI2OGQzNjhhZTNh4MSdMw==: --dhchap-ctrl-secret DHHC-1:03:MTQ2M2UzZDUwMGVmMWE3N2NiYjJhZDRmZWM3ZmNmNDBmY2ExNWYzZGJiZjJjMDZkN2E3YjlhYWZiN2ZiMWJmMqSK4cM=: 00:19:49.068 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.068 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:49.068 11:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.068 11:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.068 11:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.068 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.068 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:49.068 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:49.327 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:49.327 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.327 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:49.327 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:49.327 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:49.327 11:31:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.327 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.327 11:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.327 11:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.327 11:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.327 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.327 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.588 00:19:49.588 11:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.588 11:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.588 11:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.848 11:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.848 11:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.848 11:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.848 11:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.849 11:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.849 11:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.849 { 00:19:49.849 "cntlid": 123, 00:19:49.849 "qid": 0, 00:19:49.849 "state": "enabled", 00:19:49.849 "thread": "nvmf_tgt_poll_group_000", 00:19:49.849 "listen_address": { 00:19:49.849 "trtype": "TCP", 00:19:49.849 "adrfam": "IPv4", 00:19:49.849 "traddr": "10.0.0.2", 00:19:49.849 "trsvcid": "4420" 00:19:49.849 }, 00:19:49.849 "peer_address": { 00:19:49.849 "trtype": "TCP", 00:19:49.849 "adrfam": "IPv4", 00:19:49.849 "traddr": "10.0.0.1", 00:19:49.849 "trsvcid": "39480" 00:19:49.849 }, 00:19:49.849 "auth": { 00:19:49.849 "state": "completed", 00:19:49.849 "digest": "sha512", 00:19:49.849 "dhgroup": "ffdhe4096" 00:19:49.849 } 00:19:49.849 } 00:19:49.849 ]' 00:19:49.849 11:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.849 11:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:49.849 11:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.849 11:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:49.849 11:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.849 11:31:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.849 11:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.849 11:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.108 11:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDRlZjc5ZGJlYzY5MWM1MjEzZTBiYTY0OTZlMmU2YTPpkw+v: --dhchap-ctrl-secret DHHC-1:02:ZWQ3NjA0NTdjNDE3MjU2ZTY5MGI2YzI4YTQ3NzMzMjc3NDhjNTM2NDQ0NjEzY2E4JTvMtw==: 00:19:51.049 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.049 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:51.049 11:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.049 11:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.049 11:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.049 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.049 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:51.049 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:51.049 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:51.049 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.049 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:51.049 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:51.049 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:51.049 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.049 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.049 11:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.049 11:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.049 11:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.049 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.049 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.309 00:19:51.309 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.309 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.309 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.309 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.309 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.309 11:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.309 11:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.309 11:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.568 11:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.568 { 00:19:51.568 "cntlid": 125, 00:19:51.568 "qid": 0, 00:19:51.568 "state": "enabled", 00:19:51.568 "thread": "nvmf_tgt_poll_group_000", 00:19:51.568 "listen_address": { 00:19:51.568 "trtype": "TCP", 00:19:51.568 "adrfam": "IPv4", 00:19:51.568 "traddr": "10.0.0.2", 00:19:51.568 "trsvcid": "4420" 00:19:51.568 }, 00:19:51.568 "peer_address": { 00:19:51.568 "trtype": "TCP", 00:19:51.568 "adrfam": "IPv4", 00:19:51.568 "traddr": "10.0.0.1", 00:19:51.568 "trsvcid": "39510" 00:19:51.568 }, 00:19:51.568 "auth": { 00:19:51.568 "state": "completed", 00:19:51.568 "digest": "sha512", 00:19:51.568 "dhgroup": "ffdhe4096" 00:19:51.568 } 00:19:51.568 } 00:19:51.568 ]' 00:19:51.568 11:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.568 11:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:51.568 11:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.568 11:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:51.569 11:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.569 11:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.569 11:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.569 11:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.828 11:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDNhYTgwMzIzZDc0MmNhNzRkOGUwOTMwODI4NTFmYzg2YmQ0NzA5Nzg5ZTk0NDFm9lQsYQ==: --dhchap-ctrl-secret DHHC-1:01:ZDkzYzRjODhlMGI1YmM3MjM5NmU0N2RjN2Q1NjcyZDT0j6Fg: 00:19:52.398 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:19:52.398 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:52.398 11:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.398 11:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.398 11:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.398 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.398 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:52.398 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:52.658 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:52.658 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.658 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:52.658 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:52.658 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:52.658 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.658 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:52.658 11:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.658 11:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.658 11:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.658 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.659 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.919 00:19:52.919 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.919 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.919 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.179 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.179 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.179 11:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.179 11:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:53.179 11:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.179 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.179 { 00:19:53.179 "cntlid": 127, 00:19:53.179 "qid": 0, 00:19:53.179 "state": "enabled", 00:19:53.179 "thread": "nvmf_tgt_poll_group_000", 00:19:53.179 "listen_address": { 00:19:53.179 "trtype": "TCP", 00:19:53.179 "adrfam": "IPv4", 00:19:53.179 "traddr": "10.0.0.2", 00:19:53.179 "trsvcid": "4420" 00:19:53.179 }, 00:19:53.179 "peer_address": { 00:19:53.179 "trtype": "TCP", 00:19:53.179 "adrfam": "IPv4", 00:19:53.179 "traddr": "10.0.0.1", 00:19:53.179 "trsvcid": "39550" 00:19:53.179 }, 00:19:53.179 "auth": { 00:19:53.179 "state": "completed", 00:19:53.179 "digest": "sha512", 00:19:53.179 "dhgroup": "ffdhe4096" 00:19:53.179 } 00:19:53.179 } 00:19:53.179 ]' 00:19:53.179 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.179 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:53.179 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.179 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:53.179 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.179 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.179 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.179 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.439 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzNkMTg1YmI1NTc1N2RmOTgxZDExNDdhZGI1NDhkYzM0MzEzYTExNDg4OGU4YjQ4YzFlY2I1NzZkMjZjZmIxMJ7vT24=: 00:19:54.010 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.010 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:54.010 11:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.010 11:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.010 11:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.010 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:54.010 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.010 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:54.010 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:54.270 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:19:54.270 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.270 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:54.270 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:54.270 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:54.270 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.270 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.270 11:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.270 11:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.270 11:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.270 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.270 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.530 00:19:54.530 11:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.530 11:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.530 11:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.790 11:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.790 11:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.790 11:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.790 11:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.790 11:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.790 11:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.790 { 00:19:54.790 "cntlid": 129, 00:19:54.790 "qid": 0, 00:19:54.790 "state": "enabled", 00:19:54.790 "thread": "nvmf_tgt_poll_group_000", 00:19:54.790 "listen_address": { 00:19:54.790 "trtype": "TCP", 00:19:54.790 "adrfam": "IPv4", 00:19:54.790 "traddr": "10.0.0.2", 00:19:54.790 "trsvcid": "4420" 00:19:54.790 }, 00:19:54.790 "peer_address": { 00:19:54.790 "trtype": "TCP", 00:19:54.790 "adrfam": "IPv4", 00:19:54.790 "traddr": "10.0.0.1", 00:19:54.790 "trsvcid": "39566" 00:19:54.790 }, 00:19:54.790 "auth": { 00:19:54.790 "state": "completed", 00:19:54.790 "digest": "sha512", 00:19:54.790 "dhgroup": "ffdhe6144" 00:19:54.790 } 00:19:54.790 } 00:19:54.790 ]' 00:19:54.790 11:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.790 11:31:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:54.790 11:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.790 11:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:54.790 11:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.050 11:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.050 11:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.050 11:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.050 11:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzQ2ZTFiOGI3NGI3MDQyYTEzMDYwNmY0ZTIxMDI1MmM0YWI0NTI2OGQzNjhhZTNh4MSdMw==: --dhchap-ctrl-secret DHHC-1:03:MTQ2M2UzZDUwMGVmMWE3N2NiYjJhZDRmZWM3ZmNmNDBmY2ExNWYzZGJiZjJjMDZkN2E3YjlhYWZiN2ZiMWJmMqSK4cM=: 00:19:55.990 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.990 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:55.990 11:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.990 11:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.990 11:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.990 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.990 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:55.990 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:55.991 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:55.991 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.991 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:55.991 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:55.991 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:55.991 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.991 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.991 11:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.991 11:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.991 11:31:24 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.991 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.991 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.251 00:19:56.251 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.251 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.251 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.511 11:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.511 11:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.511 11:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.511 11:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.511 11:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.511 11:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.511 { 00:19:56.511 "cntlid": 131, 00:19:56.511 "qid": 0, 00:19:56.511 "state": "enabled", 00:19:56.511 "thread": "nvmf_tgt_poll_group_000", 00:19:56.511 "listen_address": { 00:19:56.511 "trtype": "TCP", 00:19:56.511 "adrfam": "IPv4", 00:19:56.511 "traddr": "10.0.0.2", 00:19:56.511 "trsvcid": "4420" 00:19:56.511 }, 00:19:56.511 "peer_address": { 00:19:56.511 "trtype": "TCP", 00:19:56.511 "adrfam": "IPv4", 00:19:56.511 "traddr": "10.0.0.1", 00:19:56.511 "trsvcid": "39596" 00:19:56.511 }, 00:19:56.511 "auth": { 00:19:56.511 "state": "completed", 00:19:56.511 "digest": "sha512", 00:19:56.511 "dhgroup": "ffdhe6144" 00:19:56.511 } 00:19:56.511 } 00:19:56.511 ]' 00:19:56.511 11:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.511 11:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:56.511 11:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.511 11:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:56.511 11:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.772 11:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.772 11:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.772 11:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.772 11:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDRlZjc5ZGJlYzY5MWM1MjEzZTBiYTY0OTZlMmU2YTPpkw+v: --dhchap-ctrl-secret DHHC-1:02:ZWQ3NjA0NTdjNDE3MjU2ZTY5MGI2YzI4YTQ3NzMzMjc3NDhjNTM2NDQ0NjEzY2E4JTvMtw==: 00:19:57.714 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.714 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:57.714 11:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.714 11:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.714 11:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.714 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.714 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:57.714 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:57.714 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:57.714 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.714 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:57.714 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:57.714 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:57.714 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.714 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.714 11:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.714 11:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.714 11:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.714 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.714 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.002 00:19:58.002 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.002 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.002 11:31:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.263 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.263 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.263 11:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.263 11:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.263 11:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.263 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.263 { 00:19:58.263 "cntlid": 133, 00:19:58.263 "qid": 0, 00:19:58.263 "state": "enabled", 00:19:58.263 "thread": "nvmf_tgt_poll_group_000", 00:19:58.263 "listen_address": { 00:19:58.263 "trtype": "TCP", 00:19:58.263 "adrfam": "IPv4", 00:19:58.263 "traddr": "10.0.0.2", 00:19:58.263 "trsvcid": "4420" 00:19:58.263 }, 00:19:58.263 "peer_address": { 00:19:58.263 "trtype": "TCP", 00:19:58.263 "adrfam": "IPv4", 00:19:58.263 "traddr": "10.0.0.1", 00:19:58.263 "trsvcid": "39620" 00:19:58.263 }, 00:19:58.263 "auth": { 00:19:58.263 "state": "completed", 00:19:58.263 "digest": "sha512", 00:19:58.263 "dhgroup": "ffdhe6144" 00:19:58.263 } 00:19:58.263 } 00:19:58.263 ]' 00:19:58.263 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.263 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:58.263 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.263 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:58.263 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.524 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.524 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.524 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.524 11:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDNhYTgwMzIzZDc0MmNhNzRkOGUwOTMwODI4NTFmYzg2YmQ0NzA5Nzg5ZTk0NDFm9lQsYQ==: --dhchap-ctrl-secret DHHC-1:01:ZDkzYzRjODhlMGI1YmM3MjM5NmU0N2RjN2Q1NjcyZDT0j6Fg: 00:19:59.466 11:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.466 11:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:59.466 11:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.466 11:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.466 11:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.466 11:31:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.466 11:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:59.466 11:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:59.466 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:59.466 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.466 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:59.466 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:59.466 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:59.466 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.466 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:59.466 11:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.466 11:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.466 11:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.466 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:59.466 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:59.728 00:19:59.728 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.728 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.728 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.989 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.989 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.989 11:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.989 11:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.989 11:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.989 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.989 { 00:19:59.989 "cntlid": 135, 00:19:59.989 "qid": 0, 00:19:59.989 "state": "enabled", 00:19:59.989 "thread": "nvmf_tgt_poll_group_000", 00:19:59.989 "listen_address": { 00:19:59.989 "trtype": "TCP", 00:19:59.989 "adrfam": "IPv4", 00:19:59.989 "traddr": "10.0.0.2", 00:19:59.989 "trsvcid": "4420" 00:19:59.989 }, 
00:19:59.989 "peer_address": { 00:19:59.989 "trtype": "TCP", 00:19:59.989 "adrfam": "IPv4", 00:19:59.989 "traddr": "10.0.0.1", 00:19:59.989 "trsvcid": "49320" 00:19:59.989 }, 00:19:59.989 "auth": { 00:19:59.989 "state": "completed", 00:19:59.989 "digest": "sha512", 00:19:59.989 "dhgroup": "ffdhe6144" 00:19:59.989 } 00:19:59.989 } 00:19:59.989 ]' 00:19:59.989 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.989 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:59.989 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.989 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:59.989 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.249 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.250 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.250 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.250 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzNkMTg1YmI1NTc1N2RmOTgxZDExNDdhZGI1NDhkYzM0MzEzYTExNDg4OGU4YjQ4YzFlY2I1NzZkMjZjZmIxMJ7vT24=: 00:20:01.203 11:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.203 11:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:01.203 11:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.203 11:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.203 11:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.203 11:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.203 11:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.203 11:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:01.203 11:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:01.203 11:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:20:01.203 11:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.203 11:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:01.203 11:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:01.203 11:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:01.203 11:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:20:01.203 11:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.203 11:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.203 11:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.203 11:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.203 11:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.203 11:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.834 00:20:01.834 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.834 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.834 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.834 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.834 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.834 11:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.834 11:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.834 11:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.834 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.834 { 00:20:01.834 "cntlid": 137, 00:20:01.834 "qid": 0, 00:20:01.834 "state": "enabled", 00:20:01.834 "thread": "nvmf_tgt_poll_group_000", 00:20:01.834 "listen_address": { 00:20:01.834 "trtype": "TCP", 00:20:01.834 "adrfam": "IPv4", 00:20:01.834 "traddr": "10.0.0.2", 00:20:01.834 "trsvcid": "4420" 00:20:01.834 }, 00:20:01.834 "peer_address": { 00:20:01.834 "trtype": "TCP", 00:20:01.834 "adrfam": "IPv4", 00:20:01.834 "traddr": "10.0.0.1", 00:20:01.834 "trsvcid": "49338" 00:20:01.834 }, 00:20:01.834 "auth": { 00:20:01.834 "state": "completed", 00:20:01.834 "digest": "sha512", 00:20:01.834 "dhgroup": "ffdhe8192" 00:20:01.834 } 00:20:01.834 } 00:20:01.834 ]' 00:20:01.834 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.095 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:02.095 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.095 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:02.095 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.095 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.095 11:31:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.095 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.356 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzQ2ZTFiOGI3NGI3MDQyYTEzMDYwNmY0ZTIxMDI1MmM0YWI0NTI2OGQzNjhhZTNh4MSdMw==: --dhchap-ctrl-secret DHHC-1:03:MTQ2M2UzZDUwMGVmMWE3N2NiYjJhZDRmZWM3ZmNmNDBmY2ExNWYzZGJiZjJjMDZkN2E3YjlhYWZiN2ZiMWJmMqSK4cM=: 00:20:02.927 11:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.927 11:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:02.927 11:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.927 11:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.927 11:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.927 11:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.927 11:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:02.927 11:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:03.187 11:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:20:03.187 11:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.187 11:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:03.187 11:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:03.187 11:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:03.187 11:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.187 11:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.187 11:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.187 11:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.187 11:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.187 11:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.187 11:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.759 00:20:03.759 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.759 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.759 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.759 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.759 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.759 11:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.759 11:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.759 11:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.759 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.759 { 00:20:03.759 "cntlid": 139, 00:20:03.759 "qid": 0, 00:20:03.759 "state": "enabled", 00:20:03.759 "thread": "nvmf_tgt_poll_group_000", 00:20:03.759 "listen_address": { 00:20:03.759 "trtype": "TCP", 00:20:03.759 "adrfam": "IPv4", 00:20:03.759 "traddr": "10.0.0.2", 00:20:03.759 "trsvcid": "4420" 00:20:03.759 }, 00:20:03.759 "peer_address": { 00:20:03.759 "trtype": "TCP", 00:20:03.759 "adrfam": "IPv4", 00:20:03.760 "traddr": "10.0.0.1", 00:20:03.760 "trsvcid": "49364" 00:20:03.760 }, 00:20:03.760 "auth": { 00:20:03.760 "state": "completed", 00:20:03.760 "digest": "sha512", 00:20:03.760 "dhgroup": "ffdhe8192" 00:20:03.760 } 00:20:03.760 } 00:20:03.760 ]' 00:20:03.760 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.020 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:04.020 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.020 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:04.020 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.020 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.020 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.020 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.281 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NDRlZjc5ZGJlYzY5MWM1MjEzZTBiYTY0OTZlMmU2YTPpkw+v: --dhchap-ctrl-secret DHHC-1:02:ZWQ3NjA0NTdjNDE3MjU2ZTY5MGI2YzI4YTQ3NzMzMjc3NDhjNTM2NDQ0NjEzY2E4JTvMtw==: 00:20:04.852 11:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.852 11:31:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:04.852 11:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.852 11:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.852 11:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.852 11:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.852 11:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:04.852 11:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:05.113 11:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:20:05.113 11:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:05.113 11:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:05.113 11:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:05.113 11:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:05.113 11:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.113 11:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.113 11:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.113 11:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.113 11:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.113 11:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.113 11:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.684 00:20:05.684 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:05.684 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.684 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.684 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.684 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.684 11:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.685 11:31:34 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:05.685 11:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.685 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.685 { 00:20:05.685 "cntlid": 141, 00:20:05.685 "qid": 0, 00:20:05.685 "state": "enabled", 00:20:05.685 "thread": "nvmf_tgt_poll_group_000", 00:20:05.685 "listen_address": { 00:20:05.685 "trtype": "TCP", 00:20:05.685 "adrfam": "IPv4", 00:20:05.685 "traddr": "10.0.0.2", 00:20:05.685 "trsvcid": "4420" 00:20:05.685 }, 00:20:05.685 "peer_address": { 00:20:05.685 "trtype": "TCP", 00:20:05.685 "adrfam": "IPv4", 00:20:05.685 "traddr": "10.0.0.1", 00:20:05.685 "trsvcid": "49390" 00:20:05.685 }, 00:20:05.685 "auth": { 00:20:05.685 "state": "completed", 00:20:05.685 "digest": "sha512", 00:20:05.685 "dhgroup": "ffdhe8192" 00:20:05.685 } 00:20:05.685 } 00:20:05.685 ]' 00:20:05.685 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.685 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:05.685 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.945 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:05.945 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.945 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.945 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.945 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.945 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MDNhYTgwMzIzZDc0MmNhNzRkOGUwOTMwODI4NTFmYzg2YmQ0NzA5Nzg5ZTk0NDFm9lQsYQ==: --dhchap-ctrl-secret DHHC-1:01:ZDkzYzRjODhlMGI1YmM3MjM5NmU0N2RjN2Q1NjcyZDT0j6Fg: 00:20:06.888 11:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.888 11:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:06.888 11:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.888 11:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.888 11:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.888 11:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:06.888 11:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:06.888 11:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:06.888 11:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:20:06.888 11:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.888 11:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:06.888 11:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:06.888 11:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:06.888 11:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.888 11:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:06.888 11:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.888 11:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.888 11:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.888 11:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.888 11:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:07.460 00:20:07.460 11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.460 11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.460 11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.721 11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.721 11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.721 11:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.721 11:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.721 11:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.721 11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:07.721 { 00:20:07.721 "cntlid": 143, 00:20:07.721 "qid": 0, 00:20:07.721 "state": "enabled", 00:20:07.721 "thread": "nvmf_tgt_poll_group_000", 00:20:07.721 "listen_address": { 00:20:07.721 "trtype": "TCP", 00:20:07.721 "adrfam": "IPv4", 00:20:07.721 "traddr": "10.0.0.2", 00:20:07.721 "trsvcid": "4420" 00:20:07.721 }, 00:20:07.721 "peer_address": { 00:20:07.721 "trtype": "TCP", 00:20:07.721 "adrfam": "IPv4", 00:20:07.721 "traddr": "10.0.0.1", 00:20:07.721 "trsvcid": "49416" 00:20:07.721 }, 00:20:07.721 "auth": { 00:20:07.721 "state": "completed", 00:20:07.721 "digest": "sha512", 00:20:07.721 "dhgroup": "ffdhe8192" 00:20:07.721 } 00:20:07.721 } 00:20:07.721 ]' 00:20:07.721 11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:07.721 11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:07.721 
11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.721 11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:07.721 11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.721 11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.721 11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.721 11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.982 11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzNkMTg1YmI1NTc1N2RmOTgxZDExNDdhZGI1NDhkYzM0MzEzYTExNDg4OGU4YjQ4YzFlY2I1NzZkMjZjZmIxMJ7vT24=: 00:20:08.926 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.926 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:08.926 11:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.926 11:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.926 11:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.926 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:08.926 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:20:08.926 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:08.926 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:08.926 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:08.926 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:08.926 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:20:08.926 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:08.926 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:08.926 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:08.926 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:08.926 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.926 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:08.926 11:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.926 11:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.926 11:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.926 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.926 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.497 00:20:09.497 11:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.497 11:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.497 11:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.497 11:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.497 11:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.497 11:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.497 11:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.497 11:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.497 11:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.497 { 00:20:09.497 "cntlid": 145, 00:20:09.497 "qid": 0, 00:20:09.497 "state": "enabled", 00:20:09.497 "thread": "nvmf_tgt_poll_group_000", 00:20:09.497 "listen_address": { 00:20:09.497 "trtype": "TCP", 00:20:09.497 "adrfam": "IPv4", 00:20:09.497 "traddr": "10.0.0.2", 00:20:09.497 "trsvcid": "4420" 00:20:09.497 }, 00:20:09.497 "peer_address": { 00:20:09.497 "trtype": "TCP", 00:20:09.497 "adrfam": "IPv4", 00:20:09.497 "traddr": "10.0.0.1", 00:20:09.497 "trsvcid": "43320" 00:20:09.497 }, 00:20:09.497 "auth": { 00:20:09.497 "state": "completed", 00:20:09.497 "digest": "sha512", 00:20:09.497 "dhgroup": "ffdhe8192" 00:20:09.497 } 00:20:09.497 } 00:20:09.497 ]' 00:20:09.497 11:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.759 11:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:09.759 11:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.759 11:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:09.759 11:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.759 11:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.759 11:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.759 11:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.020 11:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzQ2ZTFiOGI3NGI3MDQyYTEzMDYwNmY0ZTIxMDI1MmM0YWI0NTI2OGQzNjhhZTNh4MSdMw==: --dhchap-ctrl-secret DHHC-1:03:MTQ2M2UzZDUwMGVmMWE3N2NiYjJhZDRmZWM3ZmNmNDBmY2ExNWYzZGJiZjJjMDZkN2E3YjlhYWZiN2ZiMWJmMqSK4cM=: 00:20:10.592 11:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.593 11:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:10.593 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.593 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.593 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.593 11:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:20:10.593 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.593 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.593 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.593 11:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:10.593 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:10.593 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:10.593 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:10.593 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:10.593 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:10.593 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:10.593 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:10.593 11:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:20:11.164 request: 00:20:11.164 { 00:20:11.164 "name": "nvme0", 00:20:11.164 "trtype": "tcp", 00:20:11.164 "traddr": "10.0.0.2", 00:20:11.164 "adrfam": "ipv4", 00:20:11.164 "trsvcid": "4420", 00:20:11.164 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:11.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:11.164 "prchk_reftag": false, 00:20:11.164 "prchk_guard": false, 00:20:11.164 "hdgst": false, 00:20:11.164 "ddgst": false, 00:20:11.164 "dhchap_key": "key2", 00:20:11.164 "method": "bdev_nvme_attach_controller", 00:20:11.164 "req_id": 1 00:20:11.164 } 00:20:11.164 Got JSON-RPC error response 00:20:11.164 response: 00:20:11.164 { 00:20:11.164 "code": -5, 00:20:11.164 "message": "Input/output error" 00:20:11.164 } 00:20:11.164 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:11.164 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:11.164 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:11.164 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:11.164 11:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:11.164 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.164 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.164 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.164 11:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.164 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.164 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.164 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.164 11:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:11.164 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:11.164 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:11.164 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:11.164 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:11.164 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:11.164 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:11.164 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:11.164 11:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:11.736 request: 00:20:11.736 { 00:20:11.736 "name": "nvme0", 00:20:11.736 "trtype": "tcp", 00:20:11.736 "traddr": "10.0.0.2", 00:20:11.736 "adrfam": "ipv4", 00:20:11.736 "trsvcid": "4420", 00:20:11.736 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:11.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:11.736 "prchk_reftag": false, 00:20:11.736 "prchk_guard": false, 00:20:11.736 "hdgst": false, 00:20:11.736 "ddgst": false, 00:20:11.736 "dhchap_key": "key1", 00:20:11.736 "dhchap_ctrlr_key": "ckey2", 00:20:11.736 "method": "bdev_nvme_attach_controller", 00:20:11.736 "req_id": 1 00:20:11.736 } 00:20:11.736 Got JSON-RPC error response 00:20:11.736 response: 00:20:11.736 { 00:20:11.736 "code": -5, 00:20:11.736 "message": "Input/output error" 00:20:11.736 } 00:20:11.736 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:11.736 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:11.736 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:11.736 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:11.736 11:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:11.736 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.736 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.736 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.736 11:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:20:11.736 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.736 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.736 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.736 11:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.736 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:11.736 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.736 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:20:11.736 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:11.736 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:11.736 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:11.736 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.736 11:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.309 request: 00:20:12.309 { 00:20:12.309 "name": "nvme0", 00:20:12.309 "trtype": "tcp", 00:20:12.309 "traddr": "10.0.0.2", 00:20:12.309 "adrfam": "ipv4", 00:20:12.309 "trsvcid": "4420", 00:20:12.309 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:12.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:12.309 "prchk_reftag": false, 00:20:12.309 "prchk_guard": false, 00:20:12.309 "hdgst": false, 00:20:12.309 "ddgst": false, 00:20:12.309 "dhchap_key": "key1", 00:20:12.309 "dhchap_ctrlr_key": "ckey1", 00:20:12.309 "method": "bdev_nvme_attach_controller", 00:20:12.309 "req_id": 1 00:20:12.309 } 00:20:12.309 Got JSON-RPC error response 00:20:12.309 response: 00:20:12.309 { 00:20:12.309 "code": -5, 00:20:12.309 "message": "Input/output error" 00:20:12.309 } 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3542697 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3542697 ']' 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3542697 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3542697 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3542697' 00:20:12.309 killing process with pid 3542697 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3542697 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3542697 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3569656 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3569656 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3569656 ']' 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:12.309 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.252 11:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:13.252 11:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:13.252 11:31:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:13.252 11:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:13.252 11:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.252 11:31:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.252 11:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:13.253 11:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3569656 00:20:13.253 11:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3569656 ']' 00:20:13.253 11:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.253 11:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:13.253 11:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:13.253 11:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:13.253 11:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.513 11:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:13.513 11:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:13.513 11:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:20:13.513 11:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.513 11:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.513 11:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.513 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:20:13.513 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.513 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:13.513 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:13.513 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:13.513 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.513 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:13.514 11:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.514 11:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.514 11:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.514 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:13.514 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:14.086 00:20:14.086 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.086 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.086 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.086 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.086 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.086 11:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.086 11:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.086 11:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.086 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.086 { 00:20:14.086 
"cntlid": 1, 00:20:14.086 "qid": 0, 00:20:14.086 "state": "enabled", 00:20:14.086 "thread": "nvmf_tgt_poll_group_000", 00:20:14.086 "listen_address": { 00:20:14.086 "trtype": "TCP", 00:20:14.086 "adrfam": "IPv4", 00:20:14.086 "traddr": "10.0.0.2", 00:20:14.086 "trsvcid": "4420" 00:20:14.086 }, 00:20:14.086 "peer_address": { 00:20:14.086 "trtype": "TCP", 00:20:14.086 "adrfam": "IPv4", 00:20:14.086 "traddr": "10.0.0.1", 00:20:14.086 "trsvcid": "43374" 00:20:14.086 }, 00:20:14.086 "auth": { 00:20:14.086 "state": "completed", 00:20:14.086 "digest": "sha512", 00:20:14.086 "dhgroup": "ffdhe8192" 00:20:14.086 } 00:20:14.086 } 00:20:14.086 ]' 00:20:14.086 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.346 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:14.346 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.346 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:14.346 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.346 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.346 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.346 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.607 11:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzNkMTg1YmI1NTc1N2RmOTgxZDExNDdhZGI1NDhkYzM0MzEzYTExNDg4OGU4YjQ4YzFlY2I1NzZkMjZjZmIxMJ7vT24=: 00:20:15.203 11:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.203 11:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:15.203 11:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.203 11:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.203 11:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.203 11:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:15.203 11:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.203 11:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.203 11:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.203 11:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:15.203 11:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:15.463 11:31:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.463 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:15.463 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.463 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:15.463 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:15.463 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:15.463 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:15.463 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.463 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.463 request: 00:20:15.463 { 00:20:15.463 "name": "nvme0", 00:20:15.463 "trtype": "tcp", 00:20:15.463 "traddr": "10.0.0.2", 00:20:15.463 "adrfam": "ipv4", 00:20:15.463 "trsvcid": "4420", 00:20:15.463 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:15.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:15.463 "prchk_reftag": false, 00:20:15.463 "prchk_guard": false, 00:20:15.463 "hdgst": false, 00:20:15.463 "ddgst": false, 00:20:15.463 "dhchap_key": "key3", 00:20:15.464 "method": "bdev_nvme_attach_controller", 00:20:15.464 "req_id": 1 00:20:15.464 } 00:20:15.464 Got JSON-RPC error response 00:20:15.464 response: 00:20:15.464 { 00:20:15.464 "code": -5, 00:20:15.464 "message": "Input/output error" 00:20:15.464 } 00:20:15.724 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:15.724 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:15.724 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:15.724 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:15.724 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:15.724 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:15.724 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:15.724 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:15.724 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.724 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:15.724 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.724 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:15.724 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:15.724 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:15.724 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:15.724 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.724 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.984 request: 00:20:15.984 { 00:20:15.984 "name": "nvme0", 00:20:15.984 "trtype": "tcp", 00:20:15.984 "traddr": "10.0.0.2", 00:20:15.984 "adrfam": "ipv4", 00:20:15.984 "trsvcid": "4420", 00:20:15.984 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:15.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:15.984 "prchk_reftag": false, 00:20:15.984 "prchk_guard": false, 00:20:15.984 "hdgst": false, 00:20:15.984 "ddgst": false, 00:20:15.984 "dhchap_key": "key3", 00:20:15.984 "method": "bdev_nvme_attach_controller", 00:20:15.984 "req_id": 1 00:20:15.984 } 00:20:15.984 Got JSON-RPC error response 00:20:15.984 response: 00:20:15.984 { 00:20:15.984 "code": -5, 00:20:15.984 "message": "Input/output error" 00:20:15.984 } 00:20:15.984 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:15.984 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:15.984 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:15.984 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:15.984 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:15.984 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:20:15.984 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:15.984 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:15.984 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:15.984 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:15.984 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:15.984 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.984 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.984 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.984 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:15.984 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.984 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.984 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.984 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:15.984 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:15.984 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:15.984 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:15.984 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:15.984 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:15.984 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:15.984 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:15.984 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:16.244 request: 00:20:16.244 { 00:20:16.244 "name": "nvme0", 00:20:16.244 "trtype": "tcp", 00:20:16.244 "traddr": "10.0.0.2", 00:20:16.244 "adrfam": "ipv4", 00:20:16.244 "trsvcid": "4420", 00:20:16.244 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:16.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:16.244 "prchk_reftag": false, 00:20:16.244 "prchk_guard": false, 00:20:16.244 "hdgst": false, 00:20:16.244 "ddgst": false, 00:20:16.244 
"dhchap_key": "key0", 00:20:16.244 "dhchap_ctrlr_key": "key1", 00:20:16.244 "method": "bdev_nvme_attach_controller", 00:20:16.244 "req_id": 1 00:20:16.244 } 00:20:16.244 Got JSON-RPC error response 00:20:16.244 response: 00:20:16.244 { 00:20:16.244 "code": -5, 00:20:16.244 "message": "Input/output error" 00:20:16.244 } 00:20:16.244 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:16.244 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:16.244 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:16.244 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:16.244 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:16.244 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:16.540 00:20:16.540 11:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:20:16.540 11:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:20:16.540 11:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.540 11:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.540 11:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.540 11:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.800 11:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:20:16.800 11:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:20:16.800 11:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3542906 00:20:16.800 11:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3542906 ']' 00:20:16.800 11:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3542906 00:20:16.800 11:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:16.800 11:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:16.800 11:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3542906 00:20:16.800 11:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:16.800 11:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:16.800 11:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3542906' 00:20:16.800 killing process with pid 3542906 00:20:16.800 11:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3542906 00:20:16.800 11:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3542906 
00:20:17.061 11:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:17.061 11:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:17.061 11:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:17.061 11:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:17.061 11:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:20:17.061 11:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:17.061 11:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:17.061 rmmod nvme_tcp 00:20:17.061 rmmod nvme_fabrics 00:20:17.061 rmmod nvme_keyring 00:20:17.061 11:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:17.061 11:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:17.061 11:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:17.061 11:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3569656 ']' 00:20:17.061 11:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3569656 00:20:17.061 11:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3569656 ']' 00:20:17.061 11:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3569656 00:20:17.061 11:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:17.061 11:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:17.061 11:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3569656 00:20:17.061 11:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:17.061 11:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:17.061 11:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3569656' 00:20:17.061 killing process with pid 3569656 00:20:17.061 11:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3569656 00:20:17.061 11:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3569656 00:20:17.321 11:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:17.321 11:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:17.321 11:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:17.321 11:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:17.321 11:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:17.321 11:31:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.321 11:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:17.321 11:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.865 11:31:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:19.865 11:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Z8W /tmp/spdk.key-sha256.tMz /tmp/spdk.key-sha384.eV6 /tmp/spdk.key-sha512.StJ /tmp/spdk.key-sha512.Sm7 /tmp/spdk.key-sha384.t4n /tmp/spdk.key-sha256.RI3 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:19.865 00:20:19.865 real 2m23.958s 00:20:19.865 user 5m20.390s 00:20:19.865 sys 0m21.183s 00:20:19.865 11:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:19.865 11:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.865 ************************************ 00:20:19.865 END TEST nvmf_auth_target 00:20:19.865 ************************************ 00:20:19.865 11:31:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:19.865 11:31:47 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:20:19.865 11:31:47 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:19.865 11:31:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:19.865 11:31:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:19.865 11:31:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:19.865 ************************************ 00:20:19.865 START TEST nvmf_bdevio_no_huge 00:20:19.865 ************************************ 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:19.865 * Looking for test storage... 00:20:19.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:19.865 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:20:19.866 11:31:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:26.457 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:26.457 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:26.457 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:26.457 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.457 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.458 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:26.458 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:26.458 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:26.458 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:26.458 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:26.458 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:26.458 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.458 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:26.458 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:26.458 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:26.458 11:31:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:26.458 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:26.458 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:26.458 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:26.458 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:26.719 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:26.719 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:26.719 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:26.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:20:26.719 00:20:26.719 --- 10.0.0.2 ping statistics --- 00:20:26.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.719 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:20:26.719 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:26.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:26.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:20:26.719 00:20:26.719 --- 10.0.0.1 ping statistics --- 00:20:26.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.719 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:20:26.719 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.719 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:26.719 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:26.719 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.719 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:26.719 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:26.719 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.719 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:26.719 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:26.719 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:26.719 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:26.719 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:26.719 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:26.719 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3574704 00:20:26.719 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3574704 00:20:26.719 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:26.719 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 3574704 ']' 00:20:26.719 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.719 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:26.719 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.719 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:26.719 11:31:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:26.719 [2024-07-15 11:31:55.360168] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:20:26.719 [2024-07-15 11:31:55.360222] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:26.980 [2024-07-15 11:31:55.447491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:26.980 [2024-07-15 11:31:55.548299] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:26.980 [2024-07-15 11:31:55.548350] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.980 [2024-07-15 11:31:55.548359] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.980 [2024-07-15 11:31:55.548367] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.980 [2024-07-15 11:31:55.548374] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:26.980 [2024-07-15 11:31:55.548568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:26.980 [2024-07-15 11:31:55.548745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:26.980 [2024-07-15 11:31:55.548934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:26.980 [2024-07-15 11:31:55.548934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:27.553 [2024-07-15 11:31:56.195359] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:27.553 Malloc0 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.553 11:31:56 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:27.553 [2024-07-15 11:31:56.237139] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.553 { 00:20:27.553 "params": { 00:20:27.553 "name": "Nvme$subsystem", 00:20:27.553 "trtype": "$TEST_TRANSPORT", 00:20:27.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.553 "adrfam": "ipv4", 00:20:27.553 "trsvcid": "$NVMF_PORT", 00:20:27.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.553 "hdgst": ${hdgst:-false}, 00:20:27.553 "ddgst": ${ddgst:-false} 00:20:27.553 }, 00:20:27.553 "method": "bdev_nvme_attach_controller" 00:20:27.553 } 00:20:27.553 EOF 00:20:27.553 )") 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:27.553 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:20:27.814 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:27.814 11:31:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:27.814 "params": { 00:20:27.814 "name": "Nvme1", 00:20:27.814 "trtype": "tcp", 00:20:27.814 "traddr": "10.0.0.2", 00:20:27.814 "adrfam": "ipv4", 00:20:27.814 "trsvcid": "4420", 00:20:27.814 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.814 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:27.814 "hdgst": false, 00:20:27.814 "ddgst": false 00:20:27.814 }, 00:20:27.814 "method": "bdev_nvme_attach_controller" 00:20:27.814 }' 00:20:27.814 [2024-07-15 11:31:56.267686] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
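Annotation: the rpc_cmd lines above wire the target together. Collected in one place as plain rpc.py calls with the same arguments (the $rpc shorthand is only for readability):
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8192-byte in-capsule data
$rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM-backed bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420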
00:20:27.814 [2024-07-15 11:31:56.267740] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3574959 ] 00:20:27.814 [2024-07-15 11:31:56.321330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:27.814 [2024-07-15 11:31:56.415421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:27.814 [2024-07-15 11:31:56.415540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:27.814 [2024-07-15 11:31:56.415542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.074 I/O targets: 00:20:28.074 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:28.074 00:20:28.074 00:20:28.074 CUnit - A unit testing framework for C - Version 2.1-3 00:20:28.074 http://cunit.sourceforge.net/ 00:20:28.074 00:20:28.074 00:20:28.074 Suite: bdevio tests on: Nvme1n1 00:20:28.074 Test: blockdev write read block ...passed 00:20:28.074 Test: blockdev write zeroes read block ...passed 00:20:28.074 Test: blockdev write zeroes read no split ...passed 00:20:28.074 Test: blockdev write zeroes read split ...passed 00:20:28.334 Test: blockdev write zeroes read split partial ...passed 00:20:28.334 Test: blockdev reset ...[2024-07-15 11:31:56.783503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:28.334 [2024-07-15 11:31:56.783559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb9c10 (9): Bad file descriptor 00:20:28.334 [2024-07-15 11:31:56.798221] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
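Annotation: bdevio consumes the attach parameters printed by gen_nvmf_target_json above. The /dev/fd/62 path in the log comes from process substitution; a file works the same way. A sketch with the resolved values, where the file name and the outer "subsystems"/"config" wrapper are assumptions based on SPDK's usual JSON config layout:
cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    } ]
  } ]
}
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
    --json /tmp/bdevio_nvme.json --no-huge -s 1024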
00:20:28.334 passed 00:20:28.334 Test: blockdev write read 8 blocks ...passed 00:20:28.334 Test: blockdev write read size > 128k ...passed 00:20:28.334 Test: blockdev write read invalid size ...passed 00:20:28.334 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:28.334 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:28.334 Test: blockdev write read max offset ...passed 00:20:28.334 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:28.334 Test: blockdev writev readv 8 blocks ...passed 00:20:28.334 Test: blockdev writev readv 30 x 1block ...passed 00:20:28.334 Test: blockdev writev readv block ...passed 00:20:28.334 Test: blockdev writev readv size > 128k ...passed 00:20:28.334 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:28.334 Test: blockdev comparev and writev ...[2024-07-15 11:31:57.025304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:28.334 [2024-07-15 11:31:57.025329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:28.334 [2024-07-15 11:31:57.025340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:28.334 [2024-07-15 11:31:57.025347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:28.334 [2024-07-15 11:31:57.025827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:28.334 [2024-07-15 11:31:57.025836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:28.335 [2024-07-15 11:31:57.025846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:28.335 [2024-07-15 11:31:57.025852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:28.335 [2024-07-15 11:31:57.026317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:28.335 [2024-07-15 11:31:57.026326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:28.335 [2024-07-15 11:31:57.026335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:28.335 [2024-07-15 11:31:57.026340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:28.335 [2024-07-15 11:31:57.026871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:28.335 [2024-07-15 11:31:57.026879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:28.335 [2024-07-15 11:31:57.026888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:28.335 [2024-07-15 11:31:57.026894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:28.595 passed 00:20:28.595 Test: blockdev nvme passthru rw ...passed 00:20:28.595 Test: blockdev nvme passthru vendor specific ...[2024-07-15 11:31:57.112833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:28.595 [2024-07-15 11:31:57.112845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:28.595 [2024-07-15 11:31:57.113163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:28.595 [2024-07-15 11:31:57.113171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:28.595 [2024-07-15 11:31:57.113630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:28.595 [2024-07-15 11:31:57.113638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:28.595 [2024-07-15 11:31:57.114051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:28.595 [2024-07-15 11:31:57.114059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:28.595 passed 00:20:28.595 Test: blockdev nvme admin passthru ...passed 00:20:28.595 Test: blockdev copy ...passed 00:20:28.595 00:20:28.595 Run Summary: Type Total Ran Passed Failed Inactive 00:20:28.595 suites 1 1 n/a 0 0 00:20:28.595 tests 23 23 23 0 0 00:20:28.595 asserts 152 152 152 0 n/a 00:20:28.595 00:20:28.595 Elapsed time = 1.142 seconds 00:20:28.855 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:28.855 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.856 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:28.856 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.856 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:28.856 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:28.856 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:28.856 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:28.856 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:28.856 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:28.856 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:28.856 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:28.856 rmmod nvme_tcp 00:20:28.856 rmmod nvme_fabrics 00:20:28.856 rmmod nvme_keyring 00:20:28.856 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:28.856 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:28.856 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:28.856 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3574704 ']' 00:20:28.856 11:31:57 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3574704 00:20:28.856 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 3574704 ']' 00:20:28.856 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 3574704 00:20:28.856 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:20:28.856 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:28.856 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3574704 00:20:28.856 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:20:28.856 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:20:28.856 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3574704' 00:20:28.856 killing process with pid 3574704 00:20:28.856 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 3574704 00:20:28.856 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 3574704 00:20:29.116 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:29.116 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:29.116 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:29.116 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:29.116 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:29.116 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.116 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:29.116 11:31:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.660 11:31:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:31.660 00:20:31.660 real 0m11.826s 00:20:31.660 user 0m12.830s 00:20:31.660 sys 0m6.193s 00:20:31.660 11:31:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:31.660 11:31:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:31.660 ************************************ 00:20:31.660 END TEST nvmf_bdevio_no_huge 00:20:31.660 ************************************ 00:20:31.660 11:31:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:31.660 11:31:59 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:31.660 11:31:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:31.660 11:31:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:31.660 11:31:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:31.660 ************************************ 00:20:31.660 START TEST nvmf_tls 00:20:31.660 ************************************ 00:20:31.660 11:31:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:31.660 * Looking for test storage... 
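Annotation: the nvmftestfini sequence above boils down to unloading the kernel initiator modules, stopping the target, and tearing the namespace plumbing back down. A condensed sketch; the ip netns delete step is an assumption standing in for _remove_spdk_ns, whose body is not shown in this log:
modprobe -v -r nvme-tcp            # also drops nvme_fabrics/nvme_keyring, as the rmmod lines show
kill "$nvmfpid" && wait "$nvmfpid"  # killprocess on the nvmf_tgt pid
ip netns delete cvl_0_0_ns_spdk     # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1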
00:20:31.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:31.660 11:32:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:38.242 
11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:38.242 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:38.242 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:38.242 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:38.242 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:38.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:38.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.438 ms 00:20:38.242 00:20:38.242 --- 10.0.0.2 ping statistics --- 00:20:38.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.242 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:38.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:38.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:20:38.242 00:20:38.242 --- 10.0.0.1 ping statistics --- 00:20:38.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.242 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:38.242 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.243 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:38.243 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:38.243 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:38.243 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:38.243 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:38.243 11:32:06 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:38.243 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:38.243 11:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:38.243 11:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.243 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3579333 00:20:38.243 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3579333 00:20:38.243 11:32:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:38.243 11:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3579333 ']' 00:20:38.243 11:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.243 11:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:38.243 11:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.243 11:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:38.243 11:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.243 [2024-07-15 11:32:06.818434] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:20:38.243 [2024-07-15 11:32:06.818483] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.243 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.243 [2024-07-15 11:32:06.901240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.503 [2024-07-15 11:32:06.964088] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.503 [2024-07-15 11:32:06.964128] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
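Annotation: the tls suite starts its target with --wait-for-rpc so that the socket implementation can be switched to ssl and its TLS version pinned before the framework (and with it the TCP transport) initializes. The RPC sequence that follows in the log, condensed:
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc framework_start_init          # finish startup only after the socket options are in place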
00:20:38.503 [2024-07-15 11:32:06.964136] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.503 [2024-07-15 11:32:06.964142] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.503 [2024-07-15 11:32:06.964148] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:38.503 [2024-07-15 11:32:06.964168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.075 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:39.075 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:39.075 11:32:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:39.075 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:39.075 11:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.075 11:32:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.075 11:32:07 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:39.075 11:32:07 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:39.336 true 00:20:39.336 11:32:07 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:39.336 11:32:07 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:39.336 11:32:07 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:39.336 11:32:07 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:39.336 11:32:07 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:39.596 11:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:39.596 11:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:39.857 11:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:39.857 11:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:39.857 11:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:39.857 11:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:39.857 11:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:40.117 11:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:40.117 11:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:40.117 11:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:40.117 11:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:40.117 11:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:40.117 11:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:40.117 11:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:40.380 11:32:08 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:40.380 11:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:40.709 11:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:40.709 11:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:40.709 11:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:40.709 11:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:40.709 11:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:40.969 11:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:40.969 11:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:40.969 11:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:40.969 11:32:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:40.969 11:32:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:40.969 11:32:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:40.969 11:32:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:40.969 11:32:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:40.969 11:32:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:40.969 11:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:40.969 11:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:40.969 11:32:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:40.969 11:32:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:40.969 11:32:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:40.969 11:32:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:40.969 11:32:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:40.969 11:32:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:40.969 11:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:40.969 11:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:40.969 11:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.6YjmDvFiWj 00:20:40.969 11:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:40.969 11:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.9Tug8djNuY 00:20:40.969 11:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:40.969 11:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:40.969 11:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.6YjmDvFiWj 00:20:40.969 11:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.9Tug8djNuY 00:20:40.969 11:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:20:41.230 11:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:41.230 11:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.6YjmDvFiWj 00:20:41.230 11:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.6YjmDvFiWj 00:20:41.230 11:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:41.490 [2024-07-15 11:32:10.035813] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.490 11:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:41.751 11:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:41.751 [2024-07-15 11:32:10.348561] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:41.751 [2024-07-15 11:32:10.348770] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.751 11:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:42.012 malloc0 00:20:42.012 11:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:42.012 11:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6YjmDvFiWj 00:20:42.273 [2024-07-15 11:32:10.795449] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:42.273 11:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.6YjmDvFiWj 00:20:42.273 EAL: No free 2048 kB hugepages reported on node 1 00:20:52.274 Initializing NVMe Controllers 00:20:52.274 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:52.274 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:52.274 Initialization complete. Launching workers. 
00:20:52.274 ======================================================== 00:20:52.274 Latency(us) 00:20:52.274 Device Information : IOPS MiB/s Average min max 00:20:52.274 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19071.70 74.50 3355.76 1059.82 6127.07 00:20:52.274 ======================================================== 00:20:52.274 Total : 19071.70 74.50 3355.76 1059.82 6127.07 00:20:52.274 00:20:52.274 11:32:20 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6YjmDvFiWj 00:20:52.274 11:32:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:52.274 11:32:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:52.274 11:32:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:52.274 11:32:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.6YjmDvFiWj' 00:20:52.274 11:32:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:52.274 11:32:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3582124 00:20:52.274 11:32:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:52.274 11:32:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3582124 /var/tmp/bdevperf.sock 00:20:52.274 11:32:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:52.274 11:32:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3582124 ']' 00:20:52.274 11:32:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:52.274 11:32:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:52.274 11:32:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:52.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:52.274 11:32:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:52.274 11:32:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.274 [2024-07-15 11:32:20.959389] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:20:52.274 [2024-07-15 11:32:20.959447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3582124 ] 00:20:52.535 EAL: No free 2048 kB hugepages reported on node 1 00:20:52.535 [2024-07-15 11:32:21.009527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.535 [2024-07-15 11:32:21.061416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:53.107 11:32:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:53.107 11:32:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:53.107 11:32:21 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6YjmDvFiWj 00:20:53.367 [2024-07-15 11:32:21.858190] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:53.367 [2024-07-15 11:32:21.858258] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:53.367 TLSTESTn1 00:20:53.367 11:32:21 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:53.367 Running I/O for 10 seconds... 00:21:05.604 00:21:05.604 Latency(us) 00:21:05.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.604 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:05.604 Verification LBA range: start 0x0 length 0x2000 00:21:05.604 TLSTESTn1 : 10.04 2843.27 11.11 0.00 0.00 44921.84 5106.35 116217.17 00:21:05.604 =================================================================================================================== 00:21:05.604 Total : 2843.27 11.11 0.00 0.00 44921.84 5106.35 116217.17 00:21:05.604 0 00:21:05.604 11:32:32 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:05.604 11:32:32 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3582124 00:21:05.604 11:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3582124 ']' 00:21:05.604 11:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3582124 00:21:05.604 11:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:05.604 11:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:05.604 11:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3582124 00:21:05.604 11:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:05.604 11:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:05.604 11:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3582124' 00:21:05.604 killing process with pid 3582124 00:21:05.604 11:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3582124 00:21:05.604 Received shutdown signal, test time was about 10.000000 seconds 00:21:05.604 00:21:05.604 Latency(us) 00:21:05.604 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:21:05.604 =================================================================================================================== 00:21:05.604 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:05.605 [2024-07-15 11:32:32.191372] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:05.605 11:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3582124 00:21:05.605 11:32:32 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9Tug8djNuY 00:21:05.605 11:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:05.605 11:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9Tug8djNuY 00:21:05.605 11:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:05.605 11:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:05.605 11:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:05.605 11:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:05.605 11:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9Tug8djNuY 00:21:05.605 11:32:32 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:05.605 11:32:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:05.605 11:32:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:05.605 11:32:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9Tug8djNuY' 00:21:05.605 11:32:32 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:05.605 11:32:32 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3584229 00:21:05.605 11:32:32 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:05.605 11:32:32 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3584229 /var/tmp/bdevperf.sock 00:21:05.605 11:32:32 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:05.605 11:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3584229 ']' 00:21:05.605 11:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:05.605 11:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:05.605 11:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:05.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:05.605 11:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:05.605 11:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.605 [2024-07-15 11:32:32.356801] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:21:05.605 [2024-07-15 11:32:32.356857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3584229 ] 00:21:05.605 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.605 [2024-07-15 11:32:32.406568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.605 [2024-07-15 11:32:32.459253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9Tug8djNuY 00:21:05.605 [2024-07-15 11:32:33.268175] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:05.605 [2024-07-15 11:32:33.268246] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:05.605 [2024-07-15 11:32:33.279252] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:05.605 [2024-07-15 11:32:33.279338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d2ec0 (107): Transport endpoint is not connected 00:21:05.605 [2024-07-15 11:32:33.280303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d2ec0 (9): Bad file descriptor 00:21:05.605 [2024-07-15 11:32:33.281305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:05.605 [2024-07-15 11:32:33.281312] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:05.605 [2024-07-15 11:32:33.281320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
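Annotation: the JSON-RPC exchange recorded below is the expected failure. /tmp/tmp.9Tug8djNuY was never registered on the target with nvmf_subsystem_add_host, so the TLS setup has nothing to match it against and the attach ends in the Input/output error captured in the response. A sketch of the same check the NOT wrapper performs, asserting that the RPC fails rather than succeeds:
if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
       bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9Tug8djNuY; then
  echo "attach with an unregistered PSK unexpectedly succeeded" >&2
  exit 1
fi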
00:21:05.605 request: 00:21:05.605 { 00:21:05.605 "name": "TLSTEST", 00:21:05.605 "trtype": "tcp", 00:21:05.605 "traddr": "10.0.0.2", 00:21:05.605 "adrfam": "ipv4", 00:21:05.605 "trsvcid": "4420", 00:21:05.605 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.605 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:05.605 "prchk_reftag": false, 00:21:05.605 "prchk_guard": false, 00:21:05.605 "hdgst": false, 00:21:05.605 "ddgst": false, 00:21:05.605 "psk": "/tmp/tmp.9Tug8djNuY", 00:21:05.605 "method": "bdev_nvme_attach_controller", 00:21:05.605 "req_id": 1 00:21:05.605 } 00:21:05.605 Got JSON-RPC error response 00:21:05.605 response: 00:21:05.605 { 00:21:05.605 "code": -5, 00:21:05.605 "message": "Input/output error" 00:21:05.605 } 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3584229 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3584229 ']' 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3584229 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3584229 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3584229' 00:21:05.605 killing process with pid 3584229 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3584229 00:21:05.605 Received shutdown signal, test time was about 10.000000 seconds 00:21:05.605 00:21:05.605 Latency(us) 00:21:05.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.605 =================================================================================================================== 00:21:05.605 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:05.605 [2024-07-15 11:32:33.366546] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3584229 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.6YjmDvFiWj 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.6YjmDvFiWj 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.6YjmDvFiWj 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.6YjmDvFiWj' 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3584487 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3584487 /var/tmp/bdevperf.sock 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3584487 ']' 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:05.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:05.605 11:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.605 [2024-07-15 11:32:33.522551] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:21:05.605 [2024-07-15 11:32:33.522603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3584487 ] 00:21:05.605 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.605 [2024-07-15 11:32:33.572658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.605 [2024-07-15 11:32:33.624157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.605 11:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:05.605 11:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:05.605 11:32:34 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.6YjmDvFiWj 00:21:05.867 [2024-07-15 11:32:34.441138] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:05.867 [2024-07-15 11:32:34.441203] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:05.867 [2024-07-15 11:32:34.452031] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:05.867 [2024-07-15 11:32:34.452051] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:05.867 [2024-07-15 11:32:34.452071] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:05.867 [2024-07-15 11:32:34.453140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x837ec0 (107): Transport endpoint is not connected 00:21:05.867 [2024-07-15 11:32:34.454135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x837ec0 (9): Bad file descriptor 00:21:05.867 [2024-07-15 11:32:34.455137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:05.867 [2024-07-15 11:32:34.455144] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:05.867 [2024-07-15 11:32:34.455152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:05.867 request: 00:21:05.867 { 00:21:05.867 "name": "TLSTEST", 00:21:05.867 "trtype": "tcp", 00:21:05.867 "traddr": "10.0.0.2", 00:21:05.867 "adrfam": "ipv4", 00:21:05.867 "trsvcid": "4420", 00:21:05.867 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.867 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:05.867 "prchk_reftag": false, 00:21:05.867 "prchk_guard": false, 00:21:05.867 "hdgst": false, 00:21:05.867 "ddgst": false, 00:21:05.867 "psk": "/tmp/tmp.6YjmDvFiWj", 00:21:05.867 "method": "bdev_nvme_attach_controller", 00:21:05.867 "req_id": 1 00:21:05.867 } 00:21:05.867 Got JSON-RPC error response 00:21:05.867 response: 00:21:05.867 { 00:21:05.867 "code": -5, 00:21:05.867 "message": "Input/output error" 00:21:05.867 } 00:21:05.867 11:32:34 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3584487 00:21:05.867 11:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3584487 ']' 00:21:05.867 11:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3584487 00:21:05.867 11:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:05.867 11:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:05.867 11:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3584487 00:21:05.867 11:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:05.867 11:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:05.867 11:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3584487' 00:21:05.867 killing process with pid 3584487 00:21:05.867 11:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3584487 00:21:05.867 Received shutdown signal, test time was about 10.000000 seconds 00:21:05.867 00:21:05.867 Latency(us) 00:21:05.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.867 =================================================================================================================== 00:21:05.867 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:05.867 [2024-07-15 11:32:34.544313] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:05.867 11:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3584487 00:21:06.128 11:32:34 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:06.128 11:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:06.128 11:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:06.128 11:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:06.128 11:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:06.128 11:32:34 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.6YjmDvFiWj 00:21:06.128 11:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:06.128 11:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.6YjmDvFiWj 00:21:06.128 11:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:06.128 11:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:06.128 11:32:34 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:06.128 11:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:06.129 11:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.6YjmDvFiWj 00:21:06.129 11:32:34 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:06.129 11:32:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:06.129 11:32:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:06.129 11:32:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.6YjmDvFiWj' 00:21:06.129 11:32:34 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:06.129 11:32:34 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3584824 00:21:06.129 11:32:34 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:06.129 11:32:34 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3584824 /var/tmp/bdevperf.sock 00:21:06.129 11:32:34 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:06.129 11:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3584824 ']' 00:21:06.129 11:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:06.129 11:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:06.129 11:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:06.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:06.129 11:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:06.129 11:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.129 [2024-07-15 11:32:34.712473] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:21:06.129 [2024-07-15 11:32:34.712525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3584824 ] 00:21:06.129 EAL: No free 2048 kB hugepages reported on node 1 00:21:06.129 [2024-07-15 11:32:34.762499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.129 [2024-07-15 11:32:34.814012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.071 11:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:07.071 11:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:07.071 11:32:35 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6YjmDvFiWj 00:21:07.071 [2024-07-15 11:32:35.610882] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:07.071 [2024-07-15 11:32:35.610949] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:07.071 [2024-07-15 11:32:35.615133] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:07.071 [2024-07-15 11:32:35.615154] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:07.071 [2024-07-15 11:32:35.615175] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:07.071 [2024-07-15 11:32:35.615790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e1bec0 (107): Transport endpoint is not connected 00:21:07.071 [2024-07-15 11:32:35.616785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e1bec0 (9): Bad file descriptor 00:21:07.071 [2024-07-15 11:32:35.617787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:07.071 [2024-07-15 11:32:35.617795] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:07.071 [2024-07-15 11:32:35.617803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:07.071 request: 00:21:07.071 { 00:21:07.071 "name": "TLSTEST", 00:21:07.071 "trtype": "tcp", 00:21:07.071 "traddr": "10.0.0.2", 00:21:07.071 "adrfam": "ipv4", 00:21:07.071 "trsvcid": "4420", 00:21:07.071 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:07.071 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:07.071 "prchk_reftag": false, 00:21:07.071 "prchk_guard": false, 00:21:07.071 "hdgst": false, 00:21:07.071 "ddgst": false, 00:21:07.071 "psk": "/tmp/tmp.6YjmDvFiWj", 00:21:07.071 "method": "bdev_nvme_attach_controller", 00:21:07.071 "req_id": 1 00:21:07.071 } 00:21:07.071 Got JSON-RPC error response 00:21:07.071 response: 00:21:07.071 { 00:21:07.071 "code": -5, 00:21:07.071 "message": "Input/output error" 00:21:07.071 } 00:21:07.071 11:32:35 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3584824 00:21:07.071 11:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3584824 ']' 00:21:07.071 11:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3584824 00:21:07.071 11:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:07.071 11:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:07.071 11:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3584824 00:21:07.071 11:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:07.071 11:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:07.071 11:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3584824' 00:21:07.071 killing process with pid 3584824 00:21:07.071 11:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3584824 00:21:07.071 Received shutdown signal, test time was about 10.000000 seconds 00:21:07.071 00:21:07.071 Latency(us) 00:21:07.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.071 =================================================================================================================== 00:21:07.071 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:07.071 [2024-07-15 11:32:35.702267] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:07.071 11:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3584824 00:21:07.332 11:32:35 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:07.332 11:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:07.332 11:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:07.332 11:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:07.332 11:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:07.332 11:32:35 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:07.332 11:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:07.332 11:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:07.332 11:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:07.332 11:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:07.332 11:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:21:07.332 11:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:07.332 11:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:07.332 11:32:35 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:07.332 11:32:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:07.332 11:32:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:07.332 11:32:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:07.332 11:32:35 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:07.332 11:32:35 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3584950 00:21:07.332 11:32:35 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:07.332 11:32:35 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3584950 /var/tmp/bdevperf.sock 00:21:07.332 11:32:35 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:07.332 11:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3584950 ']' 00:21:07.332 11:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:07.332 11:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:07.332 11:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:07.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:07.332 11:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:07.332 11:32:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.332 [2024-07-15 11:32:35.860177] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:21:07.332 [2024-07-15 11:32:35.860235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3584950 ] 00:21:07.332 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.332 [2024-07-15 11:32:35.910316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.332 [2024-07-15 11:32:35.962695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.275 11:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:08.275 11:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:08.275 11:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:08.275 [2024-07-15 11:32:36.777702] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:08.275 [2024-07-15 11:32:36.779404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130b4a0 (9): Bad file descriptor 00:21:08.275 [2024-07-15 11:32:36.780403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:08.275 [2024-07-15 11:32:36.780411] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:08.275 [2024-07-15 11:32:36.780419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:08.275 request: 00:21:08.275 { 00:21:08.275 "name": "TLSTEST", 00:21:08.275 "trtype": "tcp", 00:21:08.275 "traddr": "10.0.0.2", 00:21:08.275 "adrfam": "ipv4", 00:21:08.275 "trsvcid": "4420", 00:21:08.275 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.275 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:08.275 "prchk_reftag": false, 00:21:08.275 "prchk_guard": false, 00:21:08.275 "hdgst": false, 00:21:08.275 "ddgst": false, 00:21:08.275 "method": "bdev_nvme_attach_controller", 00:21:08.275 "req_id": 1 00:21:08.275 } 00:21:08.275 Got JSON-RPC error response 00:21:08.275 response: 00:21:08.275 { 00:21:08.275 "code": -5, 00:21:08.275 "message": "Input/output error" 00:21:08.275 } 00:21:08.275 11:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3584950 00:21:08.275 11:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3584950 ']' 00:21:08.275 11:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3584950 00:21:08.275 11:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:08.275 11:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:08.275 11:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3584950 00:21:08.275 11:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:08.275 11:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:08.275 11:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3584950' 00:21:08.275 killing process with pid 3584950 00:21:08.275 11:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3584950 00:21:08.275 Received shutdown signal, test time was about 10.000000 seconds 00:21:08.275 00:21:08.275 Latency(us) 00:21:08.275 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.275 =================================================================================================================== 00:21:08.275 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:08.275 11:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3584950 00:21:08.275 11:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:08.275 11:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:08.275 11:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:08.275 11:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:08.275 11:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:08.275 11:32:36 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3579333 00:21:08.275 11:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3579333 ']' 00:21:08.275 11:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3579333 00:21:08.275 11:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:08.275 11:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:08.536 11:32:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3579333 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3579333' 00:21:08.536 
killing process with pid 3579333 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3579333 00:21:08.536 [2024-07-15 11:32:37.025679] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3579333 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.WeQqYigGT9 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.WeQqYigGT9 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3585198 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3585198 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3585198 ']' 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:08.536 11:32:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.797 [2024-07-15 11:32:37.255786] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:21:08.797 [2024-07-15 11:32:37.255843] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:08.797 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.797 [2024-07-15 11:32:37.339533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.797 [2024-07-15 11:32:37.394244] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.797 [2024-07-15 11:32:37.394275] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:08.797 [2024-07-15 11:32:37.394283] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:08.797 [2024-07-15 11:32:37.394287] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:08.797 [2024-07-15 11:32:37.394292] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:08.797 [2024-07-15 11:32:37.394308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:09.369 11:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:09.369 11:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:09.369 11:32:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:09.369 11:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:09.369 11:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.369 11:32:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.369 11:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.WeQqYigGT9 00:21:09.369 11:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.WeQqYigGT9 00:21:09.369 11:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:09.629 [2024-07-15 11:32:38.200386] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.629 11:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:09.893 11:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:09.893 [2024-07-15 11:32:38.493092] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:09.893 [2024-07-15 11:32:38.493289] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.893 11:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:10.214 malloc0 00:21:10.214 11:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:10.214 11:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.WeQqYigGT9 00:21:10.474 [2024-07-15 11:32:38.919912] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:10.474 11:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WeQqYigGT9 00:21:10.474 11:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:10.474 11:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:10.474 11:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:10.474 11:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WeQqYigGT9' 00:21:10.474 11:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:10.474 11:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:10.474 11:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3585563 00:21:10.474 11:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:10.474 11:32:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3585563 /var/tmp/bdevperf.sock 00:21:10.474 11:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3585563 ']' 00:21:10.474 11:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:10.474 11:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:10.474 11:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:10.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:10.474 11:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:10.474 11:32:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.474 [2024-07-15 11:32:38.970925] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:21:10.474 [2024-07-15 11:32:38.971015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3585563 ] 00:21:10.474 EAL: No free 2048 kB hugepages reported on node 1 00:21:10.474 [2024-07-15 11:32:39.026840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.474 [2024-07-15 11:32:39.079922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.417 11:32:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:11.417 11:32:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:11.417 11:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WeQqYigGT9 00:21:11.417 [2024-07-15 11:32:39.896846] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:11.417 [2024-07-15 11:32:39.896905] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:11.417 TLSTESTn1 00:21:11.417 11:32:39 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:11.417 Running I/O for 10 seconds... 00:21:23.656 00:21:23.656 Latency(us) 00:21:23.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.656 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:23.656 Verification LBA range: start 0x0 length 0x2000 00:21:23.656 TLSTESTn1 : 10.06 3576.26 13.97 0.00 0.00 35682.51 7372.80 58982.40 00:21:23.656 =================================================================================================================== 00:21:23.656 Total : 3576.26 13.97 0.00 0.00 35682.51 7372.80 58982.40 00:21:23.656 0 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3585563 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3585563 ']' 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3585563 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3585563 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3585563' 00:21:23.656 killing process with pid 3585563 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3585563 00:21:23.656 Received shutdown signal, test time was about 10.000000 seconds 00:21:23.656 00:21:23.656 Latency(us) 00:21:23.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:21:23.656 =================================================================================================================== 00:21:23.656 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:23.656 [2024-07-15 11:32:50.243287] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3585563 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.WeQqYigGT9 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WeQqYigGT9 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WeQqYigGT9 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WeQqYigGT9 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WeQqYigGT9' 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3587898 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3587898 /var/tmp/bdevperf.sock 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3587898 ']' 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:23.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:23.656 11:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.656 [2024-07-15 11:32:50.412896] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:21:23.656 [2024-07-15 11:32:50.412947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3587898 ] 00:21:23.656 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.656 [2024-07-15 11:32:50.462899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.656 [2024-07-15 11:32:50.514927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WeQqYigGT9 00:21:23.656 [2024-07-15 11:32:51.331918] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:23.656 [2024-07-15 11:32:51.331964] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:23.656 [2024-07-15 11:32:51.331969] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.WeQqYigGT9 00:21:23.656 request: 00:21:23.656 { 00:21:23.656 "name": "TLSTEST", 00:21:23.656 "trtype": "tcp", 00:21:23.656 "traddr": "10.0.0.2", 00:21:23.656 "adrfam": "ipv4", 00:21:23.656 "trsvcid": "4420", 00:21:23.656 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.656 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:23.656 "prchk_reftag": false, 00:21:23.656 "prchk_guard": false, 00:21:23.656 "hdgst": false, 00:21:23.656 "ddgst": false, 00:21:23.656 "psk": "/tmp/tmp.WeQqYigGT9", 00:21:23.656 "method": "bdev_nvme_attach_controller", 00:21:23.656 "req_id": 1 00:21:23.656 } 00:21:23.656 Got JSON-RPC error response 00:21:23.656 response: 00:21:23.656 { 00:21:23.656 "code": -1, 00:21:23.656 "message": "Operation not permitted" 00:21:23.656 } 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3587898 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3587898 ']' 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3587898 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3587898 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3587898' 00:21:23.656 killing process with pid 3587898 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3587898 00:21:23.656 Received shutdown signal, test time was about 10.000000 seconds 00:21:23.656 00:21:23.656 Latency(us) 00:21:23.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.656 
=================================================================================================================== 00:21:23.656 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3587898 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3585198 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3585198 ']' 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3585198 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3585198 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3585198' 00:21:23.656 killing process with pid 3585198 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3585198 00:21:23.656 [2024-07-15 11:32:51.578383] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3585198 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3588159 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3588159 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3588159 ']' 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:23.656 11:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.656 [2024-07-15 11:32:51.754908] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:21:23.656 [2024-07-15 11:32:51.754962] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.656 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.656 [2024-07-15 11:32:51.834979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.656 [2024-07-15 11:32:51.893472] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.656 [2024-07-15 11:32:51.893508] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.656 [2024-07-15 11:32:51.893513] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.656 [2024-07-15 11:32:51.893518] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.656 [2024-07-15 11:32:51.893522] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:23.656 [2024-07-15 11:32:51.893543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.916 11:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:23.916 11:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:23.916 11:32:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:23.916 11:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:23.916 11:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.916 11:32:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.916 11:32:52 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.WeQqYigGT9 00:21:23.916 11:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:23.916 11:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.WeQqYigGT9 00:21:23.916 11:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:21:23.916 11:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:23.916 11:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:21:23.916 11:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:23.916 11:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.WeQqYigGT9 00:21:23.916 11:32:52 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.WeQqYigGT9 00:21:23.916 11:32:52 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:24.177 [2024-07-15 11:32:52.699885] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.177 11:32:52 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:24.177 
11:32:52 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:24.437 [2024-07-15 11:32:52.996611] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:24.437 [2024-07-15 11:32:52.996790] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.437 11:32:53 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:24.697 malloc0 00:21:24.697 11:32:53 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:24.697 11:32:53 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WeQqYigGT9 00:21:24.956 [2024-07-15 11:32:53.431544] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:24.956 [2024-07-15 11:32:53.431562] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:24.956 [2024-07-15 11:32:53.431582] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:24.956 request: 00:21:24.956 { 00:21:24.956 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.956 "host": "nqn.2016-06.io.spdk:host1", 00:21:24.956 "psk": "/tmp/tmp.WeQqYigGT9", 00:21:24.956 "method": "nvmf_subsystem_add_host", 00:21:24.956 "req_id": 1 00:21:24.956 } 00:21:24.956 Got JSON-RPC error response 00:21:24.956 response: 00:21:24.956 { 00:21:24.956 "code": -32603, 00:21:24.956 "message": "Internal error" 00:21:24.956 } 00:21:24.956 11:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:24.956 11:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:24.956 11:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:24.956 11:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:24.956 11:32:53 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3588159 00:21:24.956 11:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3588159 ']' 00:21:24.956 11:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3588159 00:21:24.956 11:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:24.956 11:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:24.956 11:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3588159 00:21:24.956 11:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:24.956 11:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:24.957 11:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3588159' 00:21:24.957 killing process with pid 3588159 00:21:24.957 11:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3588159 00:21:24.957 11:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3588159 00:21:24.957 11:32:53 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.WeQqYigGT9 00:21:24.957 11:32:53 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:24.957 
11:32:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:24.957 11:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:24.957 11:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:24.957 11:32:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3588612 00:21:24.957 11:32:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3588612 00:21:24.957 11:32:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:24.957 11:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3588612 ']' 00:21:24.957 11:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.957 11:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:24.957 11:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.957 11:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:24.957 11:32:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.217 [2024-07-15 11:32:53.682406] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:21:25.217 [2024-07-15 11:32:53.682456] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.217 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.217 [2024-07-15 11:32:53.764230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.217 [2024-07-15 11:32:53.818034] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:25.217 [2024-07-15 11:32:53.818069] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:25.217 [2024-07-15 11:32:53.818075] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:25.217 [2024-07-15 11:32:53.818080] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:25.217 [2024-07-15 11:32:53.818084] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:25.217 [2024-07-15 11:32:53.818101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.787 11:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:25.787 11:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:25.787 11:32:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:25.787 11:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:25.787 11:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.787 11:32:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:25.787 11:32:54 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.WeQqYigGT9 00:21:25.787 11:32:54 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.WeQqYigGT9 00:21:25.787 11:32:54 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:26.048 [2024-07-15 11:32:54.619873] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:26.048 11:32:54 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:26.309 11:32:54 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:26.309 [2024-07-15 11:32:54.932629] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:26.310 [2024-07-15 11:32:54.932822] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.310 11:32:54 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:26.571 malloc0 00:21:26.571 11:32:55 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:26.571 11:32:55 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WeQqYigGT9 00:21:26.831 [2024-07-15 11:32:55.395526] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:26.831 11:32:55 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3588973 00:21:26.831 11:32:55 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:26.831 11:32:55 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:26.831 11:32:55 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3588973 /var/tmp/bdevperf.sock 00:21:26.831 11:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3588973 ']' 00:21:26.831 11:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:26.831 11:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:26.831 11:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:26.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:26.831 11:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:26.831 11:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.831 [2024-07-15 11:32:55.457983] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:21:26.831 [2024-07-15 11:32:55.458037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3588973 ] 00:21:26.831 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.831 [2024-07-15 11:32:55.508072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.091 [2024-07-15 11:32:55.559509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:27.661 11:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:27.661 11:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:27.661 11:32:56 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WeQqYigGT9 00:21:27.661 [2024-07-15 11:32:56.360182] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:27.661 [2024-07-15 11:32:56.360252] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:27.921 TLSTESTn1 00:21:27.921 11:32:56 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:28.182 11:32:56 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:28.182 "subsystems": [ 00:21:28.182 { 00:21:28.182 "subsystem": "keyring", 00:21:28.182 "config": [] 00:21:28.182 }, 00:21:28.182 { 00:21:28.182 "subsystem": "iobuf", 00:21:28.182 "config": [ 00:21:28.182 { 00:21:28.182 "method": "iobuf_set_options", 00:21:28.182 "params": { 00:21:28.182 "small_pool_count": 8192, 00:21:28.182 "large_pool_count": 1024, 00:21:28.182 "small_bufsize": 8192, 00:21:28.182 "large_bufsize": 135168 00:21:28.182 } 00:21:28.182 } 00:21:28.182 ] 00:21:28.182 }, 00:21:28.182 { 00:21:28.182 "subsystem": "sock", 00:21:28.182 "config": [ 00:21:28.182 { 00:21:28.182 "method": "sock_set_default_impl", 00:21:28.182 "params": { 00:21:28.182 "impl_name": "posix" 00:21:28.182 } 00:21:28.182 }, 00:21:28.182 { 00:21:28.182 "method": "sock_impl_set_options", 00:21:28.182 "params": { 00:21:28.182 "impl_name": "ssl", 00:21:28.182 "recv_buf_size": 4096, 00:21:28.182 "send_buf_size": 4096, 00:21:28.182 "enable_recv_pipe": true, 00:21:28.182 "enable_quickack": false, 00:21:28.182 "enable_placement_id": 0, 00:21:28.182 "enable_zerocopy_send_server": true, 00:21:28.182 "enable_zerocopy_send_client": false, 00:21:28.182 "zerocopy_threshold": 0, 00:21:28.182 "tls_version": 0, 00:21:28.182 "enable_ktls": false 00:21:28.182 } 00:21:28.182 }, 00:21:28.182 { 00:21:28.182 "method": "sock_impl_set_options", 00:21:28.182 "params": { 00:21:28.182 "impl_name": "posix", 00:21:28.182 "recv_buf_size": 2097152, 00:21:28.182 
"send_buf_size": 2097152, 00:21:28.182 "enable_recv_pipe": true, 00:21:28.182 "enable_quickack": false, 00:21:28.182 "enable_placement_id": 0, 00:21:28.182 "enable_zerocopy_send_server": true, 00:21:28.182 "enable_zerocopy_send_client": false, 00:21:28.182 "zerocopy_threshold": 0, 00:21:28.182 "tls_version": 0, 00:21:28.182 "enable_ktls": false 00:21:28.182 } 00:21:28.182 } 00:21:28.182 ] 00:21:28.182 }, 00:21:28.182 { 00:21:28.182 "subsystem": "vmd", 00:21:28.182 "config": [] 00:21:28.182 }, 00:21:28.182 { 00:21:28.182 "subsystem": "accel", 00:21:28.182 "config": [ 00:21:28.182 { 00:21:28.182 "method": "accel_set_options", 00:21:28.182 "params": { 00:21:28.182 "small_cache_size": 128, 00:21:28.182 "large_cache_size": 16, 00:21:28.182 "task_count": 2048, 00:21:28.182 "sequence_count": 2048, 00:21:28.182 "buf_count": 2048 00:21:28.182 } 00:21:28.182 } 00:21:28.182 ] 00:21:28.182 }, 00:21:28.182 { 00:21:28.182 "subsystem": "bdev", 00:21:28.182 "config": [ 00:21:28.182 { 00:21:28.182 "method": "bdev_set_options", 00:21:28.182 "params": { 00:21:28.182 "bdev_io_pool_size": 65535, 00:21:28.182 "bdev_io_cache_size": 256, 00:21:28.182 "bdev_auto_examine": true, 00:21:28.182 "iobuf_small_cache_size": 128, 00:21:28.182 "iobuf_large_cache_size": 16 00:21:28.182 } 00:21:28.182 }, 00:21:28.182 { 00:21:28.182 "method": "bdev_raid_set_options", 00:21:28.182 "params": { 00:21:28.182 "process_window_size_kb": 1024 00:21:28.182 } 00:21:28.182 }, 00:21:28.182 { 00:21:28.182 "method": "bdev_iscsi_set_options", 00:21:28.182 "params": { 00:21:28.182 "timeout_sec": 30 00:21:28.182 } 00:21:28.182 }, 00:21:28.182 { 00:21:28.182 "method": "bdev_nvme_set_options", 00:21:28.182 "params": { 00:21:28.182 "action_on_timeout": "none", 00:21:28.182 "timeout_us": 0, 00:21:28.182 "timeout_admin_us": 0, 00:21:28.182 "keep_alive_timeout_ms": 10000, 00:21:28.182 "arbitration_burst": 0, 00:21:28.182 "low_priority_weight": 0, 00:21:28.182 "medium_priority_weight": 0, 00:21:28.182 "high_priority_weight": 0, 00:21:28.182 "nvme_adminq_poll_period_us": 10000, 00:21:28.182 "nvme_ioq_poll_period_us": 0, 00:21:28.182 "io_queue_requests": 0, 00:21:28.182 "delay_cmd_submit": true, 00:21:28.182 "transport_retry_count": 4, 00:21:28.182 "bdev_retry_count": 3, 00:21:28.182 "transport_ack_timeout": 0, 00:21:28.182 "ctrlr_loss_timeout_sec": 0, 00:21:28.182 "reconnect_delay_sec": 0, 00:21:28.182 "fast_io_fail_timeout_sec": 0, 00:21:28.182 "disable_auto_failback": false, 00:21:28.182 "generate_uuids": false, 00:21:28.182 "transport_tos": 0, 00:21:28.182 "nvme_error_stat": false, 00:21:28.182 "rdma_srq_size": 0, 00:21:28.182 "io_path_stat": false, 00:21:28.182 "allow_accel_sequence": false, 00:21:28.182 "rdma_max_cq_size": 0, 00:21:28.182 "rdma_cm_event_timeout_ms": 0, 00:21:28.182 "dhchap_digests": [ 00:21:28.182 "sha256", 00:21:28.182 "sha384", 00:21:28.182 "sha512" 00:21:28.182 ], 00:21:28.182 "dhchap_dhgroups": [ 00:21:28.182 "null", 00:21:28.182 "ffdhe2048", 00:21:28.182 "ffdhe3072", 00:21:28.182 "ffdhe4096", 00:21:28.182 "ffdhe6144", 00:21:28.182 "ffdhe8192" 00:21:28.182 ] 00:21:28.182 } 00:21:28.182 }, 00:21:28.182 { 00:21:28.182 "method": "bdev_nvme_set_hotplug", 00:21:28.182 "params": { 00:21:28.182 "period_us": 100000, 00:21:28.182 "enable": false 00:21:28.182 } 00:21:28.182 }, 00:21:28.182 { 00:21:28.182 "method": "bdev_malloc_create", 00:21:28.182 "params": { 00:21:28.182 "name": "malloc0", 00:21:28.182 "num_blocks": 8192, 00:21:28.182 "block_size": 4096, 00:21:28.182 "physical_block_size": 4096, 00:21:28.182 "uuid": 
"d63951d8-9748-44f2-8c4f-d782bc59eff0", 00:21:28.182 "optimal_io_boundary": 0 00:21:28.182 } 00:21:28.182 }, 00:21:28.182 { 00:21:28.182 "method": "bdev_wait_for_examine" 00:21:28.182 } 00:21:28.182 ] 00:21:28.182 }, 00:21:28.182 { 00:21:28.182 "subsystem": "nbd", 00:21:28.182 "config": [] 00:21:28.182 }, 00:21:28.182 { 00:21:28.182 "subsystem": "scheduler", 00:21:28.182 "config": [ 00:21:28.182 { 00:21:28.182 "method": "framework_set_scheduler", 00:21:28.182 "params": { 00:21:28.182 "name": "static" 00:21:28.182 } 00:21:28.182 } 00:21:28.182 ] 00:21:28.183 }, 00:21:28.183 { 00:21:28.183 "subsystem": "nvmf", 00:21:28.183 "config": [ 00:21:28.183 { 00:21:28.183 "method": "nvmf_set_config", 00:21:28.183 "params": { 00:21:28.183 "discovery_filter": "match_any", 00:21:28.183 "admin_cmd_passthru": { 00:21:28.183 "identify_ctrlr": false 00:21:28.183 } 00:21:28.183 } 00:21:28.183 }, 00:21:28.183 { 00:21:28.183 "method": "nvmf_set_max_subsystems", 00:21:28.183 "params": { 00:21:28.183 "max_subsystems": 1024 00:21:28.183 } 00:21:28.183 }, 00:21:28.183 { 00:21:28.183 "method": "nvmf_set_crdt", 00:21:28.183 "params": { 00:21:28.183 "crdt1": 0, 00:21:28.183 "crdt2": 0, 00:21:28.183 "crdt3": 0 00:21:28.183 } 00:21:28.183 }, 00:21:28.183 { 00:21:28.183 "method": "nvmf_create_transport", 00:21:28.183 "params": { 00:21:28.183 "trtype": "TCP", 00:21:28.183 "max_queue_depth": 128, 00:21:28.183 "max_io_qpairs_per_ctrlr": 127, 00:21:28.183 "in_capsule_data_size": 4096, 00:21:28.183 "max_io_size": 131072, 00:21:28.183 "io_unit_size": 131072, 00:21:28.183 "max_aq_depth": 128, 00:21:28.183 "num_shared_buffers": 511, 00:21:28.183 "buf_cache_size": 4294967295, 00:21:28.183 "dif_insert_or_strip": false, 00:21:28.183 "zcopy": false, 00:21:28.183 "c2h_success": false, 00:21:28.183 "sock_priority": 0, 00:21:28.183 "abort_timeout_sec": 1, 00:21:28.183 "ack_timeout": 0, 00:21:28.183 "data_wr_pool_size": 0 00:21:28.183 } 00:21:28.183 }, 00:21:28.183 { 00:21:28.183 "method": "nvmf_create_subsystem", 00:21:28.183 "params": { 00:21:28.183 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.183 "allow_any_host": false, 00:21:28.183 "serial_number": "SPDK00000000000001", 00:21:28.183 "model_number": "SPDK bdev Controller", 00:21:28.183 "max_namespaces": 10, 00:21:28.183 "min_cntlid": 1, 00:21:28.183 "max_cntlid": 65519, 00:21:28.183 "ana_reporting": false 00:21:28.183 } 00:21:28.183 }, 00:21:28.183 { 00:21:28.183 "method": "nvmf_subsystem_add_host", 00:21:28.183 "params": { 00:21:28.183 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.183 "host": "nqn.2016-06.io.spdk:host1", 00:21:28.183 "psk": "/tmp/tmp.WeQqYigGT9" 00:21:28.183 } 00:21:28.183 }, 00:21:28.183 { 00:21:28.183 "method": "nvmf_subsystem_add_ns", 00:21:28.183 "params": { 00:21:28.183 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.183 "namespace": { 00:21:28.183 "nsid": 1, 00:21:28.183 "bdev_name": "malloc0", 00:21:28.183 "nguid": "D63951D8974844F28C4FD782BC59EFF0", 00:21:28.183 "uuid": "d63951d8-9748-44f2-8c4f-d782bc59eff0", 00:21:28.183 "no_auto_visible": false 00:21:28.183 } 00:21:28.183 } 00:21:28.183 }, 00:21:28.183 { 00:21:28.183 "method": "nvmf_subsystem_add_listener", 00:21:28.183 "params": { 00:21:28.183 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.183 "listen_address": { 00:21:28.183 "trtype": "TCP", 00:21:28.183 "adrfam": "IPv4", 00:21:28.183 "traddr": "10.0.0.2", 00:21:28.183 "trsvcid": "4420" 00:21:28.183 }, 00:21:28.183 "secure_channel": true 00:21:28.183 } 00:21:28.183 } 00:21:28.183 ] 00:21:28.183 } 00:21:28.183 ] 00:21:28.183 }' 00:21:28.183 11:32:56 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:28.444 11:32:56 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:28.444 "subsystems": [ 00:21:28.444 { 00:21:28.444 "subsystem": "keyring", 00:21:28.444 "config": [] 00:21:28.444 }, 00:21:28.444 { 00:21:28.444 "subsystem": "iobuf", 00:21:28.444 "config": [ 00:21:28.444 { 00:21:28.444 "method": "iobuf_set_options", 00:21:28.444 "params": { 00:21:28.444 "small_pool_count": 8192, 00:21:28.444 "large_pool_count": 1024, 00:21:28.444 "small_bufsize": 8192, 00:21:28.444 "large_bufsize": 135168 00:21:28.444 } 00:21:28.444 } 00:21:28.444 ] 00:21:28.444 }, 00:21:28.444 { 00:21:28.444 "subsystem": "sock", 00:21:28.444 "config": [ 00:21:28.444 { 00:21:28.444 "method": "sock_set_default_impl", 00:21:28.444 "params": { 00:21:28.444 "impl_name": "posix" 00:21:28.444 } 00:21:28.444 }, 00:21:28.444 { 00:21:28.444 "method": "sock_impl_set_options", 00:21:28.444 "params": { 00:21:28.444 "impl_name": "ssl", 00:21:28.444 "recv_buf_size": 4096, 00:21:28.444 "send_buf_size": 4096, 00:21:28.444 "enable_recv_pipe": true, 00:21:28.444 "enable_quickack": false, 00:21:28.444 "enable_placement_id": 0, 00:21:28.444 "enable_zerocopy_send_server": true, 00:21:28.444 "enable_zerocopy_send_client": false, 00:21:28.444 "zerocopy_threshold": 0, 00:21:28.444 "tls_version": 0, 00:21:28.444 "enable_ktls": false 00:21:28.444 } 00:21:28.444 }, 00:21:28.444 { 00:21:28.444 "method": "sock_impl_set_options", 00:21:28.444 "params": { 00:21:28.444 "impl_name": "posix", 00:21:28.444 "recv_buf_size": 2097152, 00:21:28.444 "send_buf_size": 2097152, 00:21:28.444 "enable_recv_pipe": true, 00:21:28.444 "enable_quickack": false, 00:21:28.444 "enable_placement_id": 0, 00:21:28.444 "enable_zerocopy_send_server": true, 00:21:28.444 "enable_zerocopy_send_client": false, 00:21:28.444 "zerocopy_threshold": 0, 00:21:28.444 "tls_version": 0, 00:21:28.444 "enable_ktls": false 00:21:28.444 } 00:21:28.444 } 00:21:28.444 ] 00:21:28.444 }, 00:21:28.444 { 00:21:28.444 "subsystem": "vmd", 00:21:28.444 "config": [] 00:21:28.444 }, 00:21:28.444 { 00:21:28.444 "subsystem": "accel", 00:21:28.444 "config": [ 00:21:28.444 { 00:21:28.444 "method": "accel_set_options", 00:21:28.444 "params": { 00:21:28.444 "small_cache_size": 128, 00:21:28.444 "large_cache_size": 16, 00:21:28.444 "task_count": 2048, 00:21:28.444 "sequence_count": 2048, 00:21:28.444 "buf_count": 2048 00:21:28.444 } 00:21:28.445 } 00:21:28.445 ] 00:21:28.445 }, 00:21:28.445 { 00:21:28.445 "subsystem": "bdev", 00:21:28.445 "config": [ 00:21:28.445 { 00:21:28.445 "method": "bdev_set_options", 00:21:28.445 "params": { 00:21:28.445 "bdev_io_pool_size": 65535, 00:21:28.445 "bdev_io_cache_size": 256, 00:21:28.445 "bdev_auto_examine": true, 00:21:28.445 "iobuf_small_cache_size": 128, 00:21:28.445 "iobuf_large_cache_size": 16 00:21:28.445 } 00:21:28.445 }, 00:21:28.445 { 00:21:28.445 "method": "bdev_raid_set_options", 00:21:28.445 "params": { 00:21:28.445 "process_window_size_kb": 1024 00:21:28.445 } 00:21:28.445 }, 00:21:28.445 { 00:21:28.445 "method": "bdev_iscsi_set_options", 00:21:28.445 "params": { 00:21:28.445 "timeout_sec": 30 00:21:28.445 } 00:21:28.445 }, 00:21:28.445 { 00:21:28.445 "method": "bdev_nvme_set_options", 00:21:28.445 "params": { 00:21:28.445 "action_on_timeout": "none", 00:21:28.445 "timeout_us": 0, 00:21:28.445 "timeout_admin_us": 0, 00:21:28.445 "keep_alive_timeout_ms": 10000, 00:21:28.445 "arbitration_burst": 0, 
00:21:28.445 "low_priority_weight": 0, 00:21:28.445 "medium_priority_weight": 0, 00:21:28.445 "high_priority_weight": 0, 00:21:28.445 "nvme_adminq_poll_period_us": 10000, 00:21:28.445 "nvme_ioq_poll_period_us": 0, 00:21:28.445 "io_queue_requests": 512, 00:21:28.445 "delay_cmd_submit": true, 00:21:28.445 "transport_retry_count": 4, 00:21:28.445 "bdev_retry_count": 3, 00:21:28.445 "transport_ack_timeout": 0, 00:21:28.445 "ctrlr_loss_timeout_sec": 0, 00:21:28.445 "reconnect_delay_sec": 0, 00:21:28.445 "fast_io_fail_timeout_sec": 0, 00:21:28.445 "disable_auto_failback": false, 00:21:28.445 "generate_uuids": false, 00:21:28.445 "transport_tos": 0, 00:21:28.445 "nvme_error_stat": false, 00:21:28.445 "rdma_srq_size": 0, 00:21:28.445 "io_path_stat": false, 00:21:28.445 "allow_accel_sequence": false, 00:21:28.445 "rdma_max_cq_size": 0, 00:21:28.445 "rdma_cm_event_timeout_ms": 0, 00:21:28.445 "dhchap_digests": [ 00:21:28.445 "sha256", 00:21:28.445 "sha384", 00:21:28.445 "sha512" 00:21:28.445 ], 00:21:28.445 "dhchap_dhgroups": [ 00:21:28.445 "null", 00:21:28.445 "ffdhe2048", 00:21:28.445 "ffdhe3072", 00:21:28.445 "ffdhe4096", 00:21:28.445 "ffdhe6144", 00:21:28.445 "ffdhe8192" 00:21:28.445 ] 00:21:28.445 } 00:21:28.445 }, 00:21:28.445 { 00:21:28.445 "method": "bdev_nvme_attach_controller", 00:21:28.445 "params": { 00:21:28.445 "name": "TLSTEST", 00:21:28.445 "trtype": "TCP", 00:21:28.445 "adrfam": "IPv4", 00:21:28.445 "traddr": "10.0.0.2", 00:21:28.445 "trsvcid": "4420", 00:21:28.445 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.445 "prchk_reftag": false, 00:21:28.445 "prchk_guard": false, 00:21:28.445 "ctrlr_loss_timeout_sec": 0, 00:21:28.445 "reconnect_delay_sec": 0, 00:21:28.445 "fast_io_fail_timeout_sec": 0, 00:21:28.445 "psk": "/tmp/tmp.WeQqYigGT9", 00:21:28.445 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:28.445 "hdgst": false, 00:21:28.445 "ddgst": false 00:21:28.445 } 00:21:28.445 }, 00:21:28.445 { 00:21:28.445 "method": "bdev_nvme_set_hotplug", 00:21:28.445 "params": { 00:21:28.445 "period_us": 100000, 00:21:28.445 "enable": false 00:21:28.445 } 00:21:28.445 }, 00:21:28.445 { 00:21:28.445 "method": "bdev_wait_for_examine" 00:21:28.445 } 00:21:28.445 ] 00:21:28.445 }, 00:21:28.445 { 00:21:28.445 "subsystem": "nbd", 00:21:28.445 "config": [] 00:21:28.445 } 00:21:28.445 ] 00:21:28.445 }' 00:21:28.445 11:32:56 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3588973 00:21:28.445 11:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3588973 ']' 00:21:28.445 11:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3588973 00:21:28.445 11:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:28.445 11:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:28.445 11:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3588973 00:21:28.445 11:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:28.445 11:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:28.445 11:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3588973' 00:21:28.445 killing process with pid 3588973 00:21:28.445 11:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3588973 00:21:28.445 Received shutdown signal, test time was about 10.000000 seconds 00:21:28.445 00:21:28.445 Latency(us) 00:21:28.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:21:28.445 =================================================================================================================== 00:21:28.445 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:28.445 [2024-07-15 11:32:56.986662] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:28.445 11:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3588973 00:21:28.445 11:32:57 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3588612 00:21:28.445 11:32:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3588612 ']' 00:21:28.445 11:32:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3588612 00:21:28.445 11:32:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:28.445 11:32:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:28.445 11:32:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3588612 00:21:28.707 11:32:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:28.707 11:32:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:28.707 11:32:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3588612' 00:21:28.707 killing process with pid 3588612 00:21:28.707 11:32:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3588612 00:21:28.707 [2024-07-15 11:32:57.154997] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:28.707 11:32:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3588612 00:21:28.707 11:32:57 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:28.707 11:32:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:28.707 11:32:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:28.707 11:32:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.707 11:32:57 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:28.707 "subsystems": [ 00:21:28.707 { 00:21:28.707 "subsystem": "keyring", 00:21:28.707 "config": [] 00:21:28.707 }, 00:21:28.707 { 00:21:28.707 "subsystem": "iobuf", 00:21:28.707 "config": [ 00:21:28.707 { 00:21:28.707 "method": "iobuf_set_options", 00:21:28.707 "params": { 00:21:28.707 "small_pool_count": 8192, 00:21:28.707 "large_pool_count": 1024, 00:21:28.707 "small_bufsize": 8192, 00:21:28.707 "large_bufsize": 135168 00:21:28.707 } 00:21:28.707 } 00:21:28.707 ] 00:21:28.707 }, 00:21:28.707 { 00:21:28.707 "subsystem": "sock", 00:21:28.707 "config": [ 00:21:28.707 { 00:21:28.707 "method": "sock_set_default_impl", 00:21:28.707 "params": { 00:21:28.707 "impl_name": "posix" 00:21:28.707 } 00:21:28.707 }, 00:21:28.707 { 00:21:28.707 "method": "sock_impl_set_options", 00:21:28.707 "params": { 00:21:28.707 "impl_name": "ssl", 00:21:28.707 "recv_buf_size": 4096, 00:21:28.707 "send_buf_size": 4096, 00:21:28.707 "enable_recv_pipe": true, 00:21:28.707 "enable_quickack": false, 00:21:28.707 "enable_placement_id": 0, 00:21:28.707 "enable_zerocopy_send_server": true, 00:21:28.707 "enable_zerocopy_send_client": false, 00:21:28.707 "zerocopy_threshold": 0, 00:21:28.707 "tls_version": 0, 00:21:28.707 "enable_ktls": false 00:21:28.707 } 00:21:28.707 }, 00:21:28.707 { 00:21:28.707 "method": "sock_impl_set_options", 
00:21:28.707 "params": { 00:21:28.707 "impl_name": "posix", 00:21:28.707 "recv_buf_size": 2097152, 00:21:28.707 "send_buf_size": 2097152, 00:21:28.707 "enable_recv_pipe": true, 00:21:28.707 "enable_quickack": false, 00:21:28.707 "enable_placement_id": 0, 00:21:28.707 "enable_zerocopy_send_server": true, 00:21:28.707 "enable_zerocopy_send_client": false, 00:21:28.708 "zerocopy_threshold": 0, 00:21:28.708 "tls_version": 0, 00:21:28.708 "enable_ktls": false 00:21:28.708 } 00:21:28.708 } 00:21:28.708 ] 00:21:28.708 }, 00:21:28.708 { 00:21:28.708 "subsystem": "vmd", 00:21:28.708 "config": [] 00:21:28.708 }, 00:21:28.708 { 00:21:28.708 "subsystem": "accel", 00:21:28.708 "config": [ 00:21:28.708 { 00:21:28.708 "method": "accel_set_options", 00:21:28.708 "params": { 00:21:28.708 "small_cache_size": 128, 00:21:28.708 "large_cache_size": 16, 00:21:28.708 "task_count": 2048, 00:21:28.708 "sequence_count": 2048, 00:21:28.708 "buf_count": 2048 00:21:28.708 } 00:21:28.708 } 00:21:28.708 ] 00:21:28.708 }, 00:21:28.708 { 00:21:28.708 "subsystem": "bdev", 00:21:28.708 "config": [ 00:21:28.708 { 00:21:28.708 "method": "bdev_set_options", 00:21:28.708 "params": { 00:21:28.708 "bdev_io_pool_size": 65535, 00:21:28.708 "bdev_io_cache_size": 256, 00:21:28.708 "bdev_auto_examine": true, 00:21:28.708 "iobuf_small_cache_size": 128, 00:21:28.708 "iobuf_large_cache_size": 16 00:21:28.708 } 00:21:28.708 }, 00:21:28.708 { 00:21:28.708 "method": "bdev_raid_set_options", 00:21:28.708 "params": { 00:21:28.708 "process_window_size_kb": 1024 00:21:28.708 } 00:21:28.708 }, 00:21:28.708 { 00:21:28.708 "method": "bdev_iscsi_set_options", 00:21:28.708 "params": { 00:21:28.708 "timeout_sec": 30 00:21:28.708 } 00:21:28.708 }, 00:21:28.708 { 00:21:28.708 "method": "bdev_nvme_set_options", 00:21:28.708 "params": { 00:21:28.708 "action_on_timeout": "none", 00:21:28.708 "timeout_us": 0, 00:21:28.708 "timeout_admin_us": 0, 00:21:28.708 "keep_alive_timeout_ms": 10000, 00:21:28.708 "arbitration_burst": 0, 00:21:28.708 "low_priority_weight": 0, 00:21:28.708 "medium_priority_weight": 0, 00:21:28.708 "high_priority_weight": 0, 00:21:28.708 "nvme_adminq_poll_period_us": 10000, 00:21:28.708 "nvme_ioq_poll_period_us": 0, 00:21:28.708 "io_queue_requests": 0, 00:21:28.708 "delay_cmd_submit": true, 00:21:28.708 "transport_retry_count": 4, 00:21:28.708 "bdev_retry_count": 3, 00:21:28.708 "transport_ack_timeout": 0, 00:21:28.708 "ctrlr_loss_timeout_sec": 0, 00:21:28.708 "reconnect_delay_sec": 0, 00:21:28.708 "fast_io_fail_timeout_sec": 0, 00:21:28.708 "disable_auto_failback": false, 00:21:28.708 "generate_uuids": false, 00:21:28.708 "transport_tos": 0, 00:21:28.708 "nvme_error_stat": false, 00:21:28.708 "rdma_srq_size": 0, 00:21:28.708 "io_path_stat": false, 00:21:28.708 "allow_accel_sequence": false, 00:21:28.708 "rdma_max_cq_size": 0, 00:21:28.708 "rdma_cm_event_timeout_ms": 0, 00:21:28.708 "dhchap_digests": [ 00:21:28.708 "sha256", 00:21:28.708 "sha384", 00:21:28.708 "sha512" 00:21:28.708 ], 00:21:28.708 "dhchap_dhgroups": [ 00:21:28.708 "null", 00:21:28.708 "ffdhe2048", 00:21:28.708 "ffdhe3072", 00:21:28.708 "ffdhe4096", 00:21:28.708 "ffdhe6144", 00:21:28.708 "ffdhe8192" 00:21:28.708 ] 00:21:28.708 } 00:21:28.708 }, 00:21:28.708 { 00:21:28.708 "method": "bdev_nvme_set_hotplug", 00:21:28.708 "params": { 00:21:28.708 "period_us": 100000, 00:21:28.708 "enable": false 00:21:28.708 } 00:21:28.708 }, 00:21:28.708 { 00:21:28.708 "method": "bdev_malloc_create", 00:21:28.708 "params": { 00:21:28.708 "name": "malloc0", 00:21:28.708 "num_blocks": 8192, 
00:21:28.708 "block_size": 4096, 00:21:28.708 "physical_block_size": 4096, 00:21:28.708 "uuid": "d63951d8-9748-44f2-8c4f-d782bc59eff0", 00:21:28.708 "optimal_io_boundary": 0 00:21:28.708 } 00:21:28.708 }, 00:21:28.708 { 00:21:28.708 "method": "bdev_wait_for_examine" 00:21:28.708 } 00:21:28.708 ] 00:21:28.708 }, 00:21:28.708 { 00:21:28.708 "subsystem": "nbd", 00:21:28.708 "config": [] 00:21:28.708 }, 00:21:28.708 { 00:21:28.708 "subsystem": "scheduler", 00:21:28.708 "config": [ 00:21:28.708 { 00:21:28.708 "method": "framework_set_scheduler", 00:21:28.708 "params": { 00:21:28.708 "name": "static" 00:21:28.708 } 00:21:28.708 } 00:21:28.708 ] 00:21:28.708 }, 00:21:28.708 { 00:21:28.708 "subsystem": "nvmf", 00:21:28.708 "config": [ 00:21:28.708 { 00:21:28.708 "method": "nvmf_set_config", 00:21:28.708 "params": { 00:21:28.708 "discovery_filter": "match_any", 00:21:28.708 "admin_cmd_passthru": { 00:21:28.708 "identify_ctrlr": false 00:21:28.708 } 00:21:28.708 } 00:21:28.708 }, 00:21:28.708 { 00:21:28.708 "method": "nvmf_set_max_subsystems", 00:21:28.708 "params": { 00:21:28.708 "max_subsystems": 1024 00:21:28.708 } 00:21:28.708 }, 00:21:28.708 { 00:21:28.708 "method": "nvmf_set_crdt", 00:21:28.708 "params": { 00:21:28.708 "crdt1": 0, 00:21:28.708 "crdt2": 0, 00:21:28.708 "crdt3": 0 00:21:28.708 } 00:21:28.708 }, 00:21:28.708 { 00:21:28.708 "method": "nvmf_create_transport", 00:21:28.708 "params": { 00:21:28.708 "trtype": "TCP", 00:21:28.708 "max_queue_depth": 128, 00:21:28.708 "max_io_qpairs_per_ctrlr": 127, 00:21:28.708 "in_capsule_data_size": 4096, 00:21:28.708 "max_io_size": 131072, 00:21:28.708 "io_unit_size": 131072, 00:21:28.708 "max_aq_depth": 128, 00:21:28.708 "num_shared_buffers": 511, 00:21:28.708 "buf_cache_size": 4294967295, 00:21:28.708 "dif_insert_or_strip": false, 00:21:28.708 "zcopy": false, 00:21:28.708 "c2h_success": false, 00:21:28.708 "sock_priority": 0, 00:21:28.708 "abort_timeout_sec": 1, 00:21:28.708 "ack_timeout": 0, 00:21:28.708 "data_wr_pool_size": 0 00:21:28.708 } 00:21:28.708 }, 00:21:28.708 { 00:21:28.708 "method": "nvmf_create_subsystem", 00:21:28.708 "params": { 00:21:28.708 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.708 "allow_any_host": false, 00:21:28.708 "serial_number": "SPDK00000000000001", 00:21:28.708 "model_number": "SPDK bdev Controller", 00:21:28.708 "max_namespaces": 10, 00:21:28.708 "min_cntlid": 1, 00:21:28.708 "max_cntlid": 65519, 00:21:28.708 "ana_reporting": false 00:21:28.708 } 00:21:28.708 }, 00:21:28.708 { 00:21:28.708 "method": "nvmf_subsystem_add_host", 00:21:28.708 "params": { 00:21:28.708 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.708 "host": "nqn.2016-06.io.spdk:host1", 00:21:28.708 "psk": "/tmp/tmp.WeQqYigGT9" 00:21:28.708 } 00:21:28.708 }, 00:21:28.708 { 00:21:28.708 "method": "nvmf_subsystem_add_ns", 00:21:28.708 "params": { 00:21:28.708 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.708 "namespace": { 00:21:28.708 "nsid": 1, 00:21:28.708 "bdev_name": "malloc0", 00:21:28.708 "nguid": "D63951D8974844F28C4FD782BC59EFF0", 00:21:28.708 "uuid": "d63951d8-9748-44f2-8c4f-d782bc59eff0", 00:21:28.708 "no_auto_visible": false 00:21:28.708 } 00:21:28.708 } 00:21:28.708 }, 00:21:28.708 { 00:21:28.708 "method": "nvmf_subsystem_add_listener", 00:21:28.708 "params": { 00:21:28.708 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.708 "listen_address": { 00:21:28.708 "trtype": "TCP", 00:21:28.708 "adrfam": "IPv4", 00:21:28.708 "traddr": "10.0.0.2", 00:21:28.708 "trsvcid": "4420" 00:21:28.708 }, 00:21:28.708 "secure_channel": true 00:21:28.708 } 
00:21:28.708 } 00:21:28.708 ] 00:21:28.708 } 00:21:28.708 ] 00:21:28.708 }' 00:21:28.708 11:32:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3589330 00:21:28.708 11:32:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3589330 00:21:28.708 11:32:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:28.708 11:32:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3589330 ']' 00:21:28.708 11:32:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.708 11:32:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:28.708 11:32:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.708 11:32:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:28.708 11:32:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.708 [2024-07-15 11:32:57.341297] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:21:28.709 [2024-07-15 11:32:57.341352] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.709 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.970 [2024-07-15 11:32:57.426282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.970 [2024-07-15 11:32:57.479771] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.970 [2024-07-15 11:32:57.479802] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.970 [2024-07-15 11:32:57.479808] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.970 [2024-07-15 11:32:57.479813] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.970 [2024-07-15 11:32:57.479817] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
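The JSON blob echoed just above was not written by hand: target/tls.sh@196 captured it from the first target with save_config, and target/tls.sh@203 replays it through -c /dev/fd/62 when starting this nvmf_tgt, so the TLS listener, the PSK-authorized host and the malloc0 namespace come back without re-issuing the individual RPCs. A minimal sketch of that round trip, with a hypothetical temporary file standing in for the /dev/fd plumbing used by the script:

  # dump the running target's configuration (subsystems, TLS listener, PSK host, malloc0 ns)
  scripts/rpc.py save_config > /tmp/tgt_conf.json   # /tmp/tgt_conf.json is illustrative

  # start a fresh target and let it apply the saved configuration at startup
  build/bin/nvmf_tgt -m 0x2 -c /tmp/tgt_conf.json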
00:21:28.970 [2024-07-15 11:32:57.479859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.970 [2024-07-15 11:32:57.663333] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:29.231 [2024-07-15 11:32:57.679309] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:29.231 [2024-07-15 11:32:57.695352] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:29.231 [2024-07-15 11:32:57.705459] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:29.493 11:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:29.493 11:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:29.493 11:32:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:29.493 11:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:29.493 11:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.493 11:32:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:29.493 11:32:58 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3589381 00:21:29.493 11:32:58 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3589381 /var/tmp/bdevperf.sock 00:21:29.493 11:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3589381 ']' 00:21:29.493 11:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:29.493 11:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:29.493 11:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:29.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
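bdevperf is started here with -z, so it comes up idle and only listens on /var/tmp/bdevperf.sock; the controller attach arrives via the -c /dev/fd/63 config echoed below, and the verify workload is only kicked off later by bdevperf.py perform_tests (target/tls.sh@211 further down). A minimal sketch of that control flow, with the attach step elided and the backgrounding added purely for illustration:

  # start bdevperf in wait-for-RPC mode; no I/O is issued yet
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

  # ... wait for the socket, then attach the TLS-protected controller (startup config or RPC) ...

  # trigger the configured verify job and collect the results printed below
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests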
00:21:29.493 11:32:58 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:29.493 11:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:29.493 11:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.493 11:32:58 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:29.493 "subsystems": [ 00:21:29.493 { 00:21:29.493 "subsystem": "keyring", 00:21:29.493 "config": [] 00:21:29.493 }, 00:21:29.493 { 00:21:29.493 "subsystem": "iobuf", 00:21:29.493 "config": [ 00:21:29.493 { 00:21:29.493 "method": "iobuf_set_options", 00:21:29.493 "params": { 00:21:29.493 "small_pool_count": 8192, 00:21:29.493 "large_pool_count": 1024, 00:21:29.493 "small_bufsize": 8192, 00:21:29.493 "large_bufsize": 135168 00:21:29.493 } 00:21:29.493 } 00:21:29.493 ] 00:21:29.493 }, 00:21:29.493 { 00:21:29.493 "subsystem": "sock", 00:21:29.493 "config": [ 00:21:29.493 { 00:21:29.493 "method": "sock_set_default_impl", 00:21:29.493 "params": { 00:21:29.493 "impl_name": "posix" 00:21:29.493 } 00:21:29.493 }, 00:21:29.493 { 00:21:29.493 "method": "sock_impl_set_options", 00:21:29.493 "params": { 00:21:29.493 "impl_name": "ssl", 00:21:29.493 "recv_buf_size": 4096, 00:21:29.493 "send_buf_size": 4096, 00:21:29.493 "enable_recv_pipe": true, 00:21:29.493 "enable_quickack": false, 00:21:29.493 "enable_placement_id": 0, 00:21:29.493 "enable_zerocopy_send_server": true, 00:21:29.493 "enable_zerocopy_send_client": false, 00:21:29.493 "zerocopy_threshold": 0, 00:21:29.493 "tls_version": 0, 00:21:29.493 "enable_ktls": false 00:21:29.493 } 00:21:29.493 }, 00:21:29.493 { 00:21:29.493 "method": "sock_impl_set_options", 00:21:29.493 "params": { 00:21:29.493 "impl_name": "posix", 00:21:29.493 "recv_buf_size": 2097152, 00:21:29.493 "send_buf_size": 2097152, 00:21:29.493 "enable_recv_pipe": true, 00:21:29.493 "enable_quickack": false, 00:21:29.493 "enable_placement_id": 0, 00:21:29.493 "enable_zerocopy_send_server": true, 00:21:29.493 "enable_zerocopy_send_client": false, 00:21:29.493 "zerocopy_threshold": 0, 00:21:29.493 "tls_version": 0, 00:21:29.493 "enable_ktls": false 00:21:29.493 } 00:21:29.493 } 00:21:29.493 ] 00:21:29.493 }, 00:21:29.493 { 00:21:29.493 "subsystem": "vmd", 00:21:29.493 "config": [] 00:21:29.493 }, 00:21:29.493 { 00:21:29.493 "subsystem": "accel", 00:21:29.493 "config": [ 00:21:29.493 { 00:21:29.493 "method": "accel_set_options", 00:21:29.493 "params": { 00:21:29.493 "small_cache_size": 128, 00:21:29.493 "large_cache_size": 16, 00:21:29.493 "task_count": 2048, 00:21:29.493 "sequence_count": 2048, 00:21:29.493 "buf_count": 2048 00:21:29.493 } 00:21:29.493 } 00:21:29.493 ] 00:21:29.493 }, 00:21:29.493 { 00:21:29.493 "subsystem": "bdev", 00:21:29.493 "config": [ 00:21:29.493 { 00:21:29.493 "method": "bdev_set_options", 00:21:29.493 "params": { 00:21:29.493 "bdev_io_pool_size": 65535, 00:21:29.493 "bdev_io_cache_size": 256, 00:21:29.493 "bdev_auto_examine": true, 00:21:29.493 "iobuf_small_cache_size": 128, 00:21:29.493 "iobuf_large_cache_size": 16 00:21:29.493 } 00:21:29.493 }, 00:21:29.493 { 00:21:29.493 "method": "bdev_raid_set_options", 00:21:29.493 "params": { 00:21:29.493 "process_window_size_kb": 1024 00:21:29.493 } 00:21:29.493 }, 00:21:29.493 { 00:21:29.493 "method": "bdev_iscsi_set_options", 00:21:29.493 "params": { 00:21:29.493 "timeout_sec": 30 00:21:29.493 } 00:21:29.493 }, 00:21:29.493 { 00:21:29.493 "method": 
"bdev_nvme_set_options", 00:21:29.493 "params": { 00:21:29.493 "action_on_timeout": "none", 00:21:29.493 "timeout_us": 0, 00:21:29.493 "timeout_admin_us": 0, 00:21:29.493 "keep_alive_timeout_ms": 10000, 00:21:29.493 "arbitration_burst": 0, 00:21:29.493 "low_priority_weight": 0, 00:21:29.493 "medium_priority_weight": 0, 00:21:29.493 "high_priority_weight": 0, 00:21:29.493 "nvme_adminq_poll_period_us": 10000, 00:21:29.493 "nvme_ioq_poll_period_us": 0, 00:21:29.493 "io_queue_requests": 512, 00:21:29.493 "delay_cmd_submit": true, 00:21:29.493 "transport_retry_count": 4, 00:21:29.493 "bdev_retry_count": 3, 00:21:29.493 "transport_ack_timeout": 0, 00:21:29.493 "ctrlr_loss_timeout_sec": 0, 00:21:29.493 "reconnect_delay_sec": 0, 00:21:29.493 "fast_io_fail_timeout_sec": 0, 00:21:29.493 "disable_auto_failback": false, 00:21:29.493 "generate_uuids": false, 00:21:29.493 "transport_tos": 0, 00:21:29.493 "nvme_error_stat": false, 00:21:29.493 "rdma_srq_size": 0, 00:21:29.493 "io_path_stat": false, 00:21:29.493 "allow_accel_sequence": false, 00:21:29.493 "rdma_max_cq_size": 0, 00:21:29.493 "rdma_cm_event_timeout_ms": 0, 00:21:29.493 "dhchap_digests": [ 00:21:29.493 "sha256", 00:21:29.494 "sha384", 00:21:29.494 "sha512" 00:21:29.494 ], 00:21:29.494 "dhchap_dhgroups": [ 00:21:29.494 "null", 00:21:29.494 "ffdhe2048", 00:21:29.494 "ffdhe3072", 00:21:29.494 "ffdhe4096", 00:21:29.494 "ffdhe6144", 00:21:29.494 "ffdhe8192" 00:21:29.494 ] 00:21:29.494 } 00:21:29.494 }, 00:21:29.494 { 00:21:29.494 "method": "bdev_nvme_attach_controller", 00:21:29.494 "params": { 00:21:29.494 "name": "TLSTEST", 00:21:29.494 "trtype": "TCP", 00:21:29.494 "adrfam": "IPv4", 00:21:29.494 "traddr": "10.0.0.2", 00:21:29.494 "trsvcid": "4420", 00:21:29.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.494 "prchk_reftag": false, 00:21:29.494 "prchk_guard": false, 00:21:29.494 "ctrlr_loss_timeout_sec": 0, 00:21:29.494 "reconnect_delay_sec": 0, 00:21:29.494 "fast_io_fail_timeout_sec": 0, 00:21:29.494 "psk": "/tmp/tmp.WeQqYigGT9", 00:21:29.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:29.494 "hdgst": false, 00:21:29.494 "ddgst": false 00:21:29.494 } 00:21:29.494 }, 00:21:29.494 { 00:21:29.494 "method": "bdev_nvme_set_hotplug", 00:21:29.494 "params": { 00:21:29.494 "period_us": 100000, 00:21:29.494 "enable": false 00:21:29.494 } 00:21:29.494 }, 00:21:29.494 { 00:21:29.494 "method": "bdev_wait_for_examine" 00:21:29.494 } 00:21:29.494 ] 00:21:29.494 }, 00:21:29.494 { 00:21:29.494 "subsystem": "nbd", 00:21:29.494 "config": [] 00:21:29.494 } 00:21:29.494 ] 00:21:29.494 }' 00:21:29.494 [2024-07-15 11:32:58.183671] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:21:29.494 [2024-07-15 11:32:58.183723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3589381 ] 00:21:29.755 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.755 [2024-07-15 11:32:58.233867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.755 [2024-07-15 11:32:58.287140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.755 [2024-07-15 11:32:58.411777] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:29.755 [2024-07-15 11:32:58.411844] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:30.327 11:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:30.327 11:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:30.327 11:32:58 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:30.587 Running I/O for 10 seconds... 00:21:40.639 00:21:40.639 Latency(us) 00:21:40.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:40.639 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:40.639 Verification LBA range: start 0x0 length 0x2000 00:21:40.639 TLSTESTn1 : 10.08 1550.38 6.06 0.00 0.00 82286.48 6034.77 143305.39 00:21:40.639 =================================================================================================================== 00:21:40.639 Total : 1550.38 6.06 0.00 0.00 82286.48 6034.77 143305.39 00:21:40.639 0 00:21:40.639 11:33:09 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:40.639 11:33:09 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3589381 00:21:40.639 11:33:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3589381 ']' 00:21:40.639 11:33:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3589381 00:21:40.639 11:33:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:40.639 11:33:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:40.639 11:33:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3589381 00:21:40.639 11:33:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:40.639 11:33:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:40.639 11:33:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3589381' 00:21:40.639 killing process with pid 3589381 00:21:40.639 11:33:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3589381 00:21:40.639 Received shutdown signal, test time was about 10.000000 seconds 00:21:40.639 00:21:40.639 Latency(us) 00:21:40.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:40.639 =================================================================================================================== 00:21:40.639 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:40.640 [2024-07-15 11:33:09.222024] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:40.640 11:33:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3589381 00:21:40.640 11:33:09 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3589330 00:21:40.640 11:33:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3589330 ']' 00:21:40.640 11:33:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3589330 00:21:40.640 11:33:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:40.640 11:33:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:40.900 11:33:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3589330 00:21:40.900 11:33:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:40.900 11:33:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:40.900 11:33:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3589330' 00:21:40.900 killing process with pid 3589330 00:21:40.900 11:33:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3589330 00:21:40.900 [2024-07-15 11:33:09.388121] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:40.900 11:33:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3589330 00:21:40.900 11:33:09 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:40.900 11:33:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:40.900 11:33:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:40.900 11:33:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.900 11:33:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3592047 00:21:40.900 11:33:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3592047 00:21:40.900 11:33:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:40.900 11:33:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3592047 ']' 00:21:40.900 11:33:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.900 11:33:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:40.900 11:33:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.900 11:33:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:40.900 11:33:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.900 [2024-07-15 11:33:09.572203] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:21:40.900 [2024-07-15 11:33:09.572256] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.161 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.161 [2024-07-15 11:33:09.636766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.161 [2024-07-15 11:33:09.700180] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.161 [2024-07-15 11:33:09.700219] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.161 [2024-07-15 11:33:09.700227] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.161 [2024-07-15 11:33:09.700233] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.161 [2024-07-15 11:33:09.700239] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:41.161 [2024-07-15 11:33:09.700259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.733 11:33:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:41.733 11:33:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:41.733 11:33:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:41.733 11:33:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:41.733 11:33:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.733 11:33:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.733 11:33:10 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.WeQqYigGT9 00:21:41.733 11:33:10 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.WeQqYigGT9 00:21:41.733 11:33:10 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:41.994 [2024-07-15 11:33:10.519247] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.994 11:33:10 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:42.255 11:33:10 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:42.255 [2024-07-15 11:33:10.856085] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:42.255 [2024-07-15 11:33:10.856296] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.255 11:33:10 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:42.516 malloc0 00:21:42.516 11:33:11 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:42.777 11:33:11 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.WeQqYigGT9 00:21:42.777 [2024-07-15 11:33:11.356135] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:42.777 11:33:11 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:42.777 11:33:11 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3592632 00:21:42.777 11:33:11 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:42.777 11:33:11 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3592632 /var/tmp/bdevperf.sock 00:21:42.777 11:33:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3592632 ']' 00:21:42.777 11:33:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:42.777 11:33:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:42.777 11:33:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:42.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:42.777 11:33:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:42.777 11:33:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.777 [2024-07-15 11:33:11.421349] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:21:42.777 [2024-07-15 11:33:11.421401] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3592632 ] 00:21:42.777 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.038 [2024-07-15 11:33:11.497673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.038 [2024-07-15 11:33:11.550777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.610 11:33:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:43.610 11:33:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:43.610 11:33:12 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WeQqYigGT9 00:21:43.871 11:33:12 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:43.871 [2024-07-15 11:33:12.484757] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:43.871 nvme0n1 00:21:43.871 11:33:12 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:44.132 Running I/O for 1 seconds... 
00:21:45.080 00:21:45.080 Latency(us) 00:21:45.080 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.080 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:45.080 Verification LBA range: start 0x0 length 0x2000 00:21:45.080 nvme0n1 : 1.07 2287.79 8.94 0.00 0.00 54291.90 5597.87 71215.79 00:21:45.080 =================================================================================================================== 00:21:45.080 Total : 2287.79 8.94 0.00 0.00 54291.90 5597.87 71215.79 00:21:45.080 0 00:21:45.080 11:33:13 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3592632 00:21:45.080 11:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3592632 ']' 00:21:45.080 11:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3592632 00:21:45.080 11:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:45.080 11:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:45.080 11:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3592632 00:21:45.341 11:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:45.341 11:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:45.342 11:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3592632' 00:21:45.342 killing process with pid 3592632 00:21:45.342 11:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3592632 00:21:45.342 Received shutdown signal, test time was about 1.000000 seconds 00:21:45.342 00:21:45.342 Latency(us) 00:21:45.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.342 =================================================================================================================== 00:21:45.342 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:45.342 11:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3592632 00:21:45.342 11:33:13 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3592047 00:21:45.342 11:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3592047 ']' 00:21:45.342 11:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3592047 00:21:45.342 11:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:45.342 11:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:45.342 11:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3592047 00:21:45.342 11:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:45.342 11:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:45.342 11:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3592047' 00:21:45.342 killing process with pid 3592047 00:21:45.342 11:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3592047 00:21:45.342 [2024-07-15 11:33:13.960511] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:45.342 11:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3592047 00:21:45.603 11:33:14 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:21:45.603 11:33:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:45.603 
11:33:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:45.603 11:33:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:45.603 11:33:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3593213 00:21:45.603 11:33:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3593213 00:21:45.603 11:33:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:45.603 11:33:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3593213 ']' 00:21:45.603 11:33:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.603 11:33:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:45.603 11:33:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.603 11:33:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:45.603 11:33:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:45.603 [2024-07-15 11:33:14.169984] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:21:45.603 [2024-07-15 11:33:14.170049] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.603 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.603 [2024-07-15 11:33:14.234679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.603 [2024-07-15 11:33:14.299985] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.603 [2024-07-15 11:33:14.300024] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:45.603 [2024-07-15 11:33:14.300032] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:45.603 [2024-07-15 11:33:14.300038] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:45.603 [2024-07-15 11:33:14.300044] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
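The target in this phase is launched inside the cvl_0_0_ns_spdk network namespace, and the script blocks on the target's RPC socket before issuing any configuration. A minimal sketch of that start-and-wait pattern, assuming the waitforlisten helper from autotest_common.sh is sourced:

    # start nvmf_tgt in the test namespace (shm id 0, all tracepoint groups enabled)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # block until the target answers on its default RPC socket, /var/tmp/spdk.sock
    waitforlisten "$nvmfpid"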
00:21:45.603 [2024-07-15 11:33:14.300066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.547 11:33:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:46.547 11:33:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:46.547 11:33:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:46.547 11:33:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:46.547 11:33:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:46.547 11:33:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.547 11:33:14 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:21:46.547 11:33:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.547 11:33:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:46.547 [2024-07-15 11:33:14.978776] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.547 malloc0 00:21:46.547 [2024-07-15 11:33:15.005528] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:46.547 [2024-07-15 11:33:15.005746] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.547 11:33:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.547 11:33:15 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=3593339 00:21:46.547 11:33:15 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 3593339 /var/tmp/bdevperf.sock 00:21:46.547 11:33:15 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:46.547 11:33:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3593339 ']' 00:21:46.547 11:33:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:46.547 11:33:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:46.547 11:33:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:46.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:46.547 11:33:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:46.547 11:33:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:46.547 [2024-07-15 11:33:15.083977] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:21:46.547 [2024-07-15 11:33:15.084023] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3593339 ] 00:21:46.547 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.547 [2024-07-15 11:33:15.158959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.547 [2024-07-15 11:33:15.212436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.488 11:33:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:47.488 11:33:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:47.488 11:33:15 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WeQqYigGT9 00:21:47.488 11:33:16 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:47.488 [2024-07-15 11:33:16.126388] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:47.748 nvme0n1 00:21:47.748 11:33:16 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:47.748 Running I/O for 1 seconds... 00:21:48.692 00:21:48.692 Latency(us) 00:21:48.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.692 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:48.692 Verification LBA range: start 0x0 length 0x2000 00:21:48.692 nvme0n1 : 1.03 4125.15 16.11 0.00 0.00 30666.76 5570.56 57234.77 00:21:48.692 =================================================================================================================== 00:21:48.692 Total : 4125.15 16.11 0.00 0.00 30666.76 5570.56 57234.77 00:21:48.692 0 00:21:48.692 11:33:17 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:21:48.692 11:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.692 11:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.954 11:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.954 11:33:17 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:21:48.954 "subsystems": [ 00:21:48.954 { 00:21:48.954 "subsystem": "keyring", 00:21:48.954 "config": [ 00:21:48.954 { 00:21:48.954 "method": "keyring_file_add_key", 00:21:48.954 "params": { 00:21:48.954 "name": "key0", 00:21:48.954 "path": "/tmp/tmp.WeQqYigGT9" 00:21:48.954 } 00:21:48.954 } 00:21:48.954 ] 00:21:48.954 }, 00:21:48.954 { 00:21:48.954 "subsystem": "iobuf", 00:21:48.954 "config": [ 00:21:48.954 { 00:21:48.954 "method": "iobuf_set_options", 00:21:48.954 "params": { 00:21:48.954 "small_pool_count": 8192, 00:21:48.954 "large_pool_count": 1024, 00:21:48.954 "small_bufsize": 8192, 00:21:48.954 "large_bufsize": 135168 00:21:48.954 } 00:21:48.954 } 00:21:48.954 ] 00:21:48.954 }, 00:21:48.954 { 00:21:48.954 "subsystem": "sock", 00:21:48.954 "config": [ 00:21:48.954 { 00:21:48.954 "method": "sock_set_default_impl", 00:21:48.954 "params": { 00:21:48.954 "impl_name": "posix" 00:21:48.954 } 
00:21:48.954 }, 00:21:48.954 { 00:21:48.954 "method": "sock_impl_set_options", 00:21:48.954 "params": { 00:21:48.954 "impl_name": "ssl", 00:21:48.954 "recv_buf_size": 4096, 00:21:48.954 "send_buf_size": 4096, 00:21:48.954 "enable_recv_pipe": true, 00:21:48.954 "enable_quickack": false, 00:21:48.954 "enable_placement_id": 0, 00:21:48.954 "enable_zerocopy_send_server": true, 00:21:48.954 "enable_zerocopy_send_client": false, 00:21:48.954 "zerocopy_threshold": 0, 00:21:48.954 "tls_version": 0, 00:21:48.954 "enable_ktls": false 00:21:48.954 } 00:21:48.954 }, 00:21:48.954 { 00:21:48.954 "method": "sock_impl_set_options", 00:21:48.954 "params": { 00:21:48.954 "impl_name": "posix", 00:21:48.954 "recv_buf_size": 2097152, 00:21:48.954 "send_buf_size": 2097152, 00:21:48.954 "enable_recv_pipe": true, 00:21:48.954 "enable_quickack": false, 00:21:48.954 "enable_placement_id": 0, 00:21:48.954 "enable_zerocopy_send_server": true, 00:21:48.954 "enable_zerocopy_send_client": false, 00:21:48.954 "zerocopy_threshold": 0, 00:21:48.954 "tls_version": 0, 00:21:48.954 "enable_ktls": false 00:21:48.954 } 00:21:48.954 } 00:21:48.954 ] 00:21:48.954 }, 00:21:48.954 { 00:21:48.954 "subsystem": "vmd", 00:21:48.954 "config": [] 00:21:48.954 }, 00:21:48.954 { 00:21:48.954 "subsystem": "accel", 00:21:48.954 "config": [ 00:21:48.954 { 00:21:48.954 "method": "accel_set_options", 00:21:48.954 "params": { 00:21:48.954 "small_cache_size": 128, 00:21:48.954 "large_cache_size": 16, 00:21:48.954 "task_count": 2048, 00:21:48.954 "sequence_count": 2048, 00:21:48.954 "buf_count": 2048 00:21:48.954 } 00:21:48.954 } 00:21:48.954 ] 00:21:48.954 }, 00:21:48.954 { 00:21:48.954 "subsystem": "bdev", 00:21:48.954 "config": [ 00:21:48.954 { 00:21:48.954 "method": "bdev_set_options", 00:21:48.954 "params": { 00:21:48.954 "bdev_io_pool_size": 65535, 00:21:48.954 "bdev_io_cache_size": 256, 00:21:48.954 "bdev_auto_examine": true, 00:21:48.954 "iobuf_small_cache_size": 128, 00:21:48.954 "iobuf_large_cache_size": 16 00:21:48.954 } 00:21:48.954 }, 00:21:48.954 { 00:21:48.954 "method": "bdev_raid_set_options", 00:21:48.954 "params": { 00:21:48.954 "process_window_size_kb": 1024 00:21:48.954 } 00:21:48.954 }, 00:21:48.954 { 00:21:48.954 "method": "bdev_iscsi_set_options", 00:21:48.954 "params": { 00:21:48.954 "timeout_sec": 30 00:21:48.954 } 00:21:48.954 }, 00:21:48.954 { 00:21:48.954 "method": "bdev_nvme_set_options", 00:21:48.954 "params": { 00:21:48.954 "action_on_timeout": "none", 00:21:48.954 "timeout_us": 0, 00:21:48.954 "timeout_admin_us": 0, 00:21:48.954 "keep_alive_timeout_ms": 10000, 00:21:48.954 "arbitration_burst": 0, 00:21:48.954 "low_priority_weight": 0, 00:21:48.954 "medium_priority_weight": 0, 00:21:48.954 "high_priority_weight": 0, 00:21:48.954 "nvme_adminq_poll_period_us": 10000, 00:21:48.954 "nvme_ioq_poll_period_us": 0, 00:21:48.954 "io_queue_requests": 0, 00:21:48.954 "delay_cmd_submit": true, 00:21:48.954 "transport_retry_count": 4, 00:21:48.954 "bdev_retry_count": 3, 00:21:48.954 "transport_ack_timeout": 0, 00:21:48.954 "ctrlr_loss_timeout_sec": 0, 00:21:48.954 "reconnect_delay_sec": 0, 00:21:48.954 "fast_io_fail_timeout_sec": 0, 00:21:48.954 "disable_auto_failback": false, 00:21:48.954 "generate_uuids": false, 00:21:48.954 "transport_tos": 0, 00:21:48.954 "nvme_error_stat": false, 00:21:48.954 "rdma_srq_size": 0, 00:21:48.954 "io_path_stat": false, 00:21:48.954 "allow_accel_sequence": false, 00:21:48.954 "rdma_max_cq_size": 0, 00:21:48.954 "rdma_cm_event_timeout_ms": 0, 00:21:48.954 "dhchap_digests": [ 00:21:48.954 "sha256", 
00:21:48.954 "sha384", 00:21:48.954 "sha512" 00:21:48.955 ], 00:21:48.955 "dhchap_dhgroups": [ 00:21:48.955 "null", 00:21:48.955 "ffdhe2048", 00:21:48.955 "ffdhe3072", 00:21:48.955 "ffdhe4096", 00:21:48.955 "ffdhe6144", 00:21:48.955 "ffdhe8192" 00:21:48.955 ] 00:21:48.955 } 00:21:48.955 }, 00:21:48.955 { 00:21:48.955 "method": "bdev_nvme_set_hotplug", 00:21:48.955 "params": { 00:21:48.955 "period_us": 100000, 00:21:48.955 "enable": false 00:21:48.955 } 00:21:48.955 }, 00:21:48.955 { 00:21:48.955 "method": "bdev_malloc_create", 00:21:48.955 "params": { 00:21:48.955 "name": "malloc0", 00:21:48.955 "num_blocks": 8192, 00:21:48.955 "block_size": 4096, 00:21:48.955 "physical_block_size": 4096, 00:21:48.955 "uuid": "69027d37-38c2-4d35-8dbe-a35de1076609", 00:21:48.955 "optimal_io_boundary": 0 00:21:48.955 } 00:21:48.955 }, 00:21:48.955 { 00:21:48.955 "method": "bdev_wait_for_examine" 00:21:48.955 } 00:21:48.955 ] 00:21:48.955 }, 00:21:48.955 { 00:21:48.955 "subsystem": "nbd", 00:21:48.955 "config": [] 00:21:48.955 }, 00:21:48.955 { 00:21:48.955 "subsystem": "scheduler", 00:21:48.955 "config": [ 00:21:48.955 { 00:21:48.955 "method": "framework_set_scheduler", 00:21:48.955 "params": { 00:21:48.955 "name": "static" 00:21:48.955 } 00:21:48.955 } 00:21:48.955 ] 00:21:48.955 }, 00:21:48.955 { 00:21:48.955 "subsystem": "nvmf", 00:21:48.955 "config": [ 00:21:48.955 { 00:21:48.955 "method": "nvmf_set_config", 00:21:48.955 "params": { 00:21:48.955 "discovery_filter": "match_any", 00:21:48.955 "admin_cmd_passthru": { 00:21:48.955 "identify_ctrlr": false 00:21:48.955 } 00:21:48.955 } 00:21:48.955 }, 00:21:48.955 { 00:21:48.955 "method": "nvmf_set_max_subsystems", 00:21:48.955 "params": { 00:21:48.955 "max_subsystems": 1024 00:21:48.955 } 00:21:48.955 }, 00:21:48.955 { 00:21:48.955 "method": "nvmf_set_crdt", 00:21:48.955 "params": { 00:21:48.955 "crdt1": 0, 00:21:48.955 "crdt2": 0, 00:21:48.955 "crdt3": 0 00:21:48.955 } 00:21:48.955 }, 00:21:48.955 { 00:21:48.955 "method": "nvmf_create_transport", 00:21:48.955 "params": { 00:21:48.955 "trtype": "TCP", 00:21:48.955 "max_queue_depth": 128, 00:21:48.955 "max_io_qpairs_per_ctrlr": 127, 00:21:48.955 "in_capsule_data_size": 4096, 00:21:48.955 "max_io_size": 131072, 00:21:48.955 "io_unit_size": 131072, 00:21:48.955 "max_aq_depth": 128, 00:21:48.955 "num_shared_buffers": 511, 00:21:48.955 "buf_cache_size": 4294967295, 00:21:48.955 "dif_insert_or_strip": false, 00:21:48.955 "zcopy": false, 00:21:48.955 "c2h_success": false, 00:21:48.955 "sock_priority": 0, 00:21:48.955 "abort_timeout_sec": 1, 00:21:48.955 "ack_timeout": 0, 00:21:48.955 "data_wr_pool_size": 0 00:21:48.955 } 00:21:48.955 }, 00:21:48.955 { 00:21:48.955 "method": "nvmf_create_subsystem", 00:21:48.955 "params": { 00:21:48.955 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.955 "allow_any_host": false, 00:21:48.955 "serial_number": "00000000000000000000", 00:21:48.955 "model_number": "SPDK bdev Controller", 00:21:48.955 "max_namespaces": 32, 00:21:48.955 "min_cntlid": 1, 00:21:48.955 "max_cntlid": 65519, 00:21:48.955 "ana_reporting": false 00:21:48.955 } 00:21:48.955 }, 00:21:48.955 { 00:21:48.955 "method": "nvmf_subsystem_add_host", 00:21:48.955 "params": { 00:21:48.955 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.955 "host": "nqn.2016-06.io.spdk:host1", 00:21:48.955 "psk": "key0" 00:21:48.955 } 00:21:48.955 }, 00:21:48.955 { 00:21:48.955 "method": "nvmf_subsystem_add_ns", 00:21:48.955 "params": { 00:21:48.955 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.955 "namespace": { 00:21:48.955 "nsid": 1, 
00:21:48.955 "bdev_name": "malloc0", 00:21:48.955 "nguid": "69027D3738C24D358DBEA35DE1076609", 00:21:48.955 "uuid": "69027d37-38c2-4d35-8dbe-a35de1076609", 00:21:48.955 "no_auto_visible": false 00:21:48.955 } 00:21:48.955 } 00:21:48.955 }, 00:21:48.955 { 00:21:48.955 "method": "nvmf_subsystem_add_listener", 00:21:48.955 "params": { 00:21:48.955 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.955 "listen_address": { 00:21:48.955 "trtype": "TCP", 00:21:48.955 "adrfam": "IPv4", 00:21:48.955 "traddr": "10.0.0.2", 00:21:48.955 "trsvcid": "4420" 00:21:48.955 }, 00:21:48.955 "secure_channel": true 00:21:48.955 } 00:21:48.955 } 00:21:48.955 ] 00:21:48.955 } 00:21:48.955 ] 00:21:48.955 }' 00:21:48.955 11:33:17 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:49.217 11:33:17 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:21:49.217 "subsystems": [ 00:21:49.217 { 00:21:49.217 "subsystem": "keyring", 00:21:49.217 "config": [ 00:21:49.217 { 00:21:49.217 "method": "keyring_file_add_key", 00:21:49.217 "params": { 00:21:49.217 "name": "key0", 00:21:49.217 "path": "/tmp/tmp.WeQqYigGT9" 00:21:49.217 } 00:21:49.217 } 00:21:49.217 ] 00:21:49.217 }, 00:21:49.217 { 00:21:49.217 "subsystem": "iobuf", 00:21:49.217 "config": [ 00:21:49.217 { 00:21:49.217 "method": "iobuf_set_options", 00:21:49.217 "params": { 00:21:49.217 "small_pool_count": 8192, 00:21:49.217 "large_pool_count": 1024, 00:21:49.217 "small_bufsize": 8192, 00:21:49.217 "large_bufsize": 135168 00:21:49.217 } 00:21:49.217 } 00:21:49.217 ] 00:21:49.217 }, 00:21:49.217 { 00:21:49.217 "subsystem": "sock", 00:21:49.217 "config": [ 00:21:49.217 { 00:21:49.217 "method": "sock_set_default_impl", 00:21:49.217 "params": { 00:21:49.217 "impl_name": "posix" 00:21:49.217 } 00:21:49.217 }, 00:21:49.217 { 00:21:49.217 "method": "sock_impl_set_options", 00:21:49.217 "params": { 00:21:49.217 "impl_name": "ssl", 00:21:49.217 "recv_buf_size": 4096, 00:21:49.217 "send_buf_size": 4096, 00:21:49.217 "enable_recv_pipe": true, 00:21:49.217 "enable_quickack": false, 00:21:49.217 "enable_placement_id": 0, 00:21:49.217 "enable_zerocopy_send_server": true, 00:21:49.217 "enable_zerocopy_send_client": false, 00:21:49.217 "zerocopy_threshold": 0, 00:21:49.217 "tls_version": 0, 00:21:49.217 "enable_ktls": false 00:21:49.217 } 00:21:49.217 }, 00:21:49.217 { 00:21:49.217 "method": "sock_impl_set_options", 00:21:49.217 "params": { 00:21:49.217 "impl_name": "posix", 00:21:49.217 "recv_buf_size": 2097152, 00:21:49.217 "send_buf_size": 2097152, 00:21:49.217 "enable_recv_pipe": true, 00:21:49.217 "enable_quickack": false, 00:21:49.217 "enable_placement_id": 0, 00:21:49.217 "enable_zerocopy_send_server": true, 00:21:49.217 "enable_zerocopy_send_client": false, 00:21:49.217 "zerocopy_threshold": 0, 00:21:49.217 "tls_version": 0, 00:21:49.217 "enable_ktls": false 00:21:49.217 } 00:21:49.217 } 00:21:49.217 ] 00:21:49.217 }, 00:21:49.217 { 00:21:49.217 "subsystem": "vmd", 00:21:49.217 "config": [] 00:21:49.217 }, 00:21:49.217 { 00:21:49.217 "subsystem": "accel", 00:21:49.217 "config": [ 00:21:49.217 { 00:21:49.217 "method": "accel_set_options", 00:21:49.217 "params": { 00:21:49.217 "small_cache_size": 128, 00:21:49.217 "large_cache_size": 16, 00:21:49.217 "task_count": 2048, 00:21:49.217 "sequence_count": 2048, 00:21:49.217 "buf_count": 2048 00:21:49.217 } 00:21:49.217 } 00:21:49.217 ] 00:21:49.217 }, 00:21:49.217 { 00:21:49.217 "subsystem": "bdev", 00:21:49.217 "config": [ 
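The tgtcfg blob captured above is the target-side view of everything configured so far; the part that actually establishes the TLS-protected subsystem corresponds roughly to the RPC sequence below against /var/tmp/spdk.sock. This is a sketch reconstructed from the method names and parameters in that JSON (the 32 MiB malloc size follows from num_blocks 8192 x block_size 4096; exact flag spellings may vary by SPDK version):

    # transport, key material, and a malloc namespace backing the subsystem
    scripts/rpc.py nvmf_create_transport -t TCP
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.WeQqYigGT9
    scripts/rpc.py bdev_malloc_create -b malloc0 32 4096
    # subsystem with a TLS (secure channel) listener and a host bound to key0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -m 32
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0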
00:21:49.217 { 00:21:49.217 "method": "bdev_set_options", 00:21:49.217 "params": { 00:21:49.217 "bdev_io_pool_size": 65535, 00:21:49.217 "bdev_io_cache_size": 256, 00:21:49.217 "bdev_auto_examine": true, 00:21:49.217 "iobuf_small_cache_size": 128, 00:21:49.217 "iobuf_large_cache_size": 16 00:21:49.217 } 00:21:49.217 }, 00:21:49.217 { 00:21:49.217 "method": "bdev_raid_set_options", 00:21:49.217 "params": { 00:21:49.217 "process_window_size_kb": 1024 00:21:49.217 } 00:21:49.217 }, 00:21:49.217 { 00:21:49.217 "method": "bdev_iscsi_set_options", 00:21:49.217 "params": { 00:21:49.217 "timeout_sec": 30 00:21:49.217 } 00:21:49.217 }, 00:21:49.217 { 00:21:49.217 "method": "bdev_nvme_set_options", 00:21:49.217 "params": { 00:21:49.217 "action_on_timeout": "none", 00:21:49.217 "timeout_us": 0, 00:21:49.217 "timeout_admin_us": 0, 00:21:49.217 "keep_alive_timeout_ms": 10000, 00:21:49.217 "arbitration_burst": 0, 00:21:49.217 "low_priority_weight": 0, 00:21:49.217 "medium_priority_weight": 0, 00:21:49.217 "high_priority_weight": 0, 00:21:49.217 "nvme_adminq_poll_period_us": 10000, 00:21:49.217 "nvme_ioq_poll_period_us": 0, 00:21:49.217 "io_queue_requests": 512, 00:21:49.217 "delay_cmd_submit": true, 00:21:49.217 "transport_retry_count": 4, 00:21:49.217 "bdev_retry_count": 3, 00:21:49.217 "transport_ack_timeout": 0, 00:21:49.217 "ctrlr_loss_timeout_sec": 0, 00:21:49.217 "reconnect_delay_sec": 0, 00:21:49.217 "fast_io_fail_timeout_sec": 0, 00:21:49.217 "disable_auto_failback": false, 00:21:49.217 "generate_uuids": false, 00:21:49.217 "transport_tos": 0, 00:21:49.217 "nvme_error_stat": false, 00:21:49.217 "rdma_srq_size": 0, 00:21:49.217 "io_path_stat": false, 00:21:49.217 "allow_accel_sequence": false, 00:21:49.217 "rdma_max_cq_size": 0, 00:21:49.217 "rdma_cm_event_timeout_ms": 0, 00:21:49.217 "dhchap_digests": [ 00:21:49.217 "sha256", 00:21:49.217 "sha384", 00:21:49.217 "sha512" 00:21:49.217 ], 00:21:49.217 "dhchap_dhgroups": [ 00:21:49.217 "null", 00:21:49.217 "ffdhe2048", 00:21:49.217 "ffdhe3072", 00:21:49.217 "ffdhe4096", 00:21:49.217 "ffdhe6144", 00:21:49.217 "ffdhe8192" 00:21:49.217 ] 00:21:49.217 } 00:21:49.217 }, 00:21:49.217 { 00:21:49.217 "method": "bdev_nvme_attach_controller", 00:21:49.217 "params": { 00:21:49.217 "name": "nvme0", 00:21:49.217 "trtype": "TCP", 00:21:49.217 "adrfam": "IPv4", 00:21:49.217 "traddr": "10.0.0.2", 00:21:49.217 "trsvcid": "4420", 00:21:49.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:49.217 "prchk_reftag": false, 00:21:49.217 "prchk_guard": false, 00:21:49.217 "ctrlr_loss_timeout_sec": 0, 00:21:49.217 "reconnect_delay_sec": 0, 00:21:49.217 "fast_io_fail_timeout_sec": 0, 00:21:49.217 "psk": "key0", 00:21:49.217 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:49.217 "hdgst": false, 00:21:49.217 "ddgst": false 00:21:49.218 } 00:21:49.218 }, 00:21:49.218 { 00:21:49.218 "method": "bdev_nvme_set_hotplug", 00:21:49.218 "params": { 00:21:49.218 "period_us": 100000, 00:21:49.218 "enable": false 00:21:49.218 } 00:21:49.218 }, 00:21:49.218 { 00:21:49.218 "method": "bdev_enable_histogram", 00:21:49.218 "params": { 00:21:49.218 "name": "nvme0n1", 00:21:49.218 "enable": true 00:21:49.218 } 00:21:49.218 }, 00:21:49.218 { 00:21:49.218 "method": "bdev_wait_for_examine" 00:21:49.218 } 00:21:49.218 ] 00:21:49.218 }, 00:21:49.218 { 00:21:49.218 "subsystem": "nbd", 00:21:49.218 "config": [] 00:21:49.218 } 00:21:49.218 ] 00:21:49.218 }' 00:21:49.218 11:33:17 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 3593339 00:21:49.218 11:33:17 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 3593339 ']' 00:21:49.218 11:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3593339 00:21:49.218 11:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:49.218 11:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:49.218 11:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3593339 00:21:49.218 11:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:49.218 11:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:49.218 11:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3593339' 00:21:49.218 killing process with pid 3593339 00:21:49.218 11:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3593339 00:21:49.218 Received shutdown signal, test time was about 1.000000 seconds 00:21:49.218 00:21:49.218 Latency(us) 00:21:49.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:49.218 =================================================================================================================== 00:21:49.218 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:49.218 11:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3593339 00:21:49.218 11:33:17 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 3593213 00:21:49.218 11:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3593213 ']' 00:21:49.218 11:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3593213 00:21:49.218 11:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:49.218 11:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:49.218 11:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3593213 00:21:49.479 11:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:49.479 11:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:49.479 11:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3593213' 00:21:49.479 killing process with pid 3593213 00:21:49.479 11:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3593213 00:21:49.479 11:33:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3593213 00:21:49.479 11:33:18 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:21:49.479 11:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:49.479 11:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:49.479 11:33:18 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:21:49.479 "subsystems": [ 00:21:49.479 { 00:21:49.479 "subsystem": "keyring", 00:21:49.479 "config": [ 00:21:49.479 { 00:21:49.479 "method": "keyring_file_add_key", 00:21:49.479 "params": { 00:21:49.479 "name": "key0", 00:21:49.479 "path": "/tmp/tmp.WeQqYigGT9" 00:21:49.479 } 00:21:49.479 } 00:21:49.479 ] 00:21:49.479 }, 00:21:49.479 { 00:21:49.479 "subsystem": "iobuf", 00:21:49.479 "config": [ 00:21:49.479 { 00:21:49.479 "method": "iobuf_set_options", 00:21:49.479 "params": { 00:21:49.479 "small_pool_count": 8192, 00:21:49.479 "large_pool_count": 1024, 00:21:49.479 "small_bufsize": 8192, 00:21:49.479 "large_bufsize": 135168 00:21:49.479 } 00:21:49.479 } 00:21:49.479 ] 00:21:49.479 }, 
00:21:49.479 { 00:21:49.479 "subsystem": "sock", 00:21:49.479 "config": [ 00:21:49.479 { 00:21:49.479 "method": "sock_set_default_impl", 00:21:49.479 "params": { 00:21:49.479 "impl_name": "posix" 00:21:49.479 } 00:21:49.479 }, 00:21:49.479 { 00:21:49.479 "method": "sock_impl_set_options", 00:21:49.479 "params": { 00:21:49.479 "impl_name": "ssl", 00:21:49.479 "recv_buf_size": 4096, 00:21:49.479 "send_buf_size": 4096, 00:21:49.479 "enable_recv_pipe": true, 00:21:49.479 "enable_quickack": false, 00:21:49.479 "enable_placement_id": 0, 00:21:49.479 "enable_zerocopy_send_server": true, 00:21:49.479 "enable_zerocopy_send_client": false, 00:21:49.479 "zerocopy_threshold": 0, 00:21:49.479 "tls_version": 0, 00:21:49.479 "enable_ktls": false 00:21:49.479 } 00:21:49.479 }, 00:21:49.479 { 00:21:49.479 "method": "sock_impl_set_options", 00:21:49.479 "params": { 00:21:49.479 "impl_name": "posix", 00:21:49.479 "recv_buf_size": 2097152, 00:21:49.479 "send_buf_size": 2097152, 00:21:49.479 "enable_recv_pipe": true, 00:21:49.479 "enable_quickack": false, 00:21:49.479 "enable_placement_id": 0, 00:21:49.479 "enable_zerocopy_send_server": true, 00:21:49.479 "enable_zerocopy_send_client": false, 00:21:49.479 "zerocopy_threshold": 0, 00:21:49.479 "tls_version": 0, 00:21:49.479 "enable_ktls": false 00:21:49.479 } 00:21:49.479 } 00:21:49.479 ] 00:21:49.479 }, 00:21:49.479 { 00:21:49.479 "subsystem": "vmd", 00:21:49.479 "config": [] 00:21:49.479 }, 00:21:49.479 { 00:21:49.479 "subsystem": "accel", 00:21:49.479 "config": [ 00:21:49.479 { 00:21:49.479 "method": "accel_set_options", 00:21:49.479 "params": { 00:21:49.479 "small_cache_size": 128, 00:21:49.479 "large_cache_size": 16, 00:21:49.479 "task_count": 2048, 00:21:49.479 "sequence_count": 2048, 00:21:49.479 "buf_count": 2048 00:21:49.479 } 00:21:49.479 } 00:21:49.479 ] 00:21:49.479 }, 00:21:49.479 { 00:21:49.479 "subsystem": "bdev", 00:21:49.479 "config": [ 00:21:49.479 { 00:21:49.479 "method": "bdev_set_options", 00:21:49.479 "params": { 00:21:49.479 "bdev_io_pool_size": 65535, 00:21:49.479 "bdev_io_cache_size": 256, 00:21:49.479 "bdev_auto_examine": true, 00:21:49.479 "iobuf_small_cache_size": 128, 00:21:49.479 "iobuf_large_cache_size": 16 00:21:49.479 } 00:21:49.479 }, 00:21:49.479 { 00:21:49.479 "method": "bdev_raid_set_options", 00:21:49.479 "params": { 00:21:49.479 "process_window_size_kb": 1024 00:21:49.479 } 00:21:49.479 }, 00:21:49.479 { 00:21:49.479 "method": "bdev_iscsi_set_options", 00:21:49.479 "params": { 00:21:49.479 "timeout_sec": 30 00:21:49.479 } 00:21:49.479 }, 00:21:49.479 { 00:21:49.479 "method": "bdev_nvme_set_options", 00:21:49.479 "params": { 00:21:49.479 "action_on_timeout": "none", 00:21:49.479 "timeout_us": 0, 00:21:49.479 "timeout_admin_us": 0, 00:21:49.479 "keep_alive_timeout_ms": 10000, 00:21:49.479 "arbitration_burst": 0, 00:21:49.479 "low_priority_weight": 0, 00:21:49.479 "medium_priority_weight": 0, 00:21:49.479 "high_priority_weight": 0, 00:21:49.479 "nvme_adminq_poll_period_us": 10000, 00:21:49.479 "nvme_ioq_poll_period_us": 0, 00:21:49.479 "io_queue_requests": 0, 00:21:49.479 "delay_cmd_submit": true, 00:21:49.479 "transport_retry_count": 4, 00:21:49.479 "bdev_retry_count": 3, 00:21:49.479 "transport_ack_timeout": 0, 00:21:49.479 "ctrlr_loss_timeout_sec": 0, 00:21:49.479 "reconnect_delay_sec": 0, 00:21:49.479 "fast_io_fail_timeout_sec": 0, 00:21:49.479 "disable_auto_failback": false, 00:21:49.479 "generate_uuids": false, 00:21:49.479 "transport_tos": 0, 00:21:49.479 "nvme_error_stat": false, 00:21:49.479 "rdma_srq_size": 0, 
00:21:49.479 "io_path_stat": false, 00:21:49.479 "allow_accel_sequence": false, 00:21:49.479 "rdma_max_cq_size": 0, 00:21:49.479 "rdma_cm_event_timeout_ms": 0, 00:21:49.479 "dhchap_digests": [ 00:21:49.479 "sha256", 00:21:49.479 "sha384", 00:21:49.479 "sha512" 00:21:49.479 ], 00:21:49.479 "dhchap_dhgroups": [ 00:21:49.479 "null", 00:21:49.479 "ffdhe2048", 00:21:49.479 "ffdhe3072", 00:21:49.479 "ffdhe4096", 00:21:49.479 "ffdhe6144", 00:21:49.479 "ffdhe8192" 00:21:49.479 ] 00:21:49.479 } 00:21:49.479 }, 00:21:49.479 { 00:21:49.479 "method": "bdev_nvme_set_hotplug", 00:21:49.479 "params": { 00:21:49.479 "period_us": 100000, 00:21:49.479 "enable": false 00:21:49.479 } 00:21:49.479 }, 00:21:49.479 { 00:21:49.479 "method": "bdev_malloc_create", 00:21:49.479 "params": { 00:21:49.479 "name": "malloc0", 00:21:49.479 "num_blocks": 8192, 00:21:49.479 "block_size": 4096, 00:21:49.479 "physical_block_size": 4096, 00:21:49.479 "uuid": "69027d37-38c2-4d35-8dbe-a35de1076609", 00:21:49.479 "optimal_io_boundary": 0 00:21:49.479 } 00:21:49.479 }, 00:21:49.479 { 00:21:49.479 "method": "bdev_wait_for_examine" 00:21:49.479 } 00:21:49.479 ] 00:21:49.479 }, 00:21:49.479 { 00:21:49.479 "subsystem": "nbd", 00:21:49.479 "config": [] 00:21:49.479 }, 00:21:49.479 { 00:21:49.479 "subsystem": "scheduler", 00:21:49.479 "config": [ 00:21:49.479 { 00:21:49.479 "method": "framework_set_scheduler", 00:21:49.479 "params": { 00:21:49.479 "name": "static" 00:21:49.479 } 00:21:49.479 } 00:21:49.479 ] 00:21:49.479 }, 00:21:49.479 { 00:21:49.479 "subsystem": "nvmf", 00:21:49.479 "config": [ 00:21:49.479 { 00:21:49.479 "method": "nvmf_set_config", 00:21:49.479 "params": { 00:21:49.479 "discovery_filter": "match_any", 00:21:49.479 "admin_cmd_passthru": { 00:21:49.479 "identify_ctrlr": false 00:21:49.479 } 00:21:49.479 } 00:21:49.479 }, 00:21:49.479 { 00:21:49.479 "method": "nvmf_set_max_subsystems", 00:21:49.479 "params": { 00:21:49.479 "max_subsystems": 1024 00:21:49.479 } 00:21:49.479 }, 00:21:49.479 { 00:21:49.479 "method": "nvmf_set_crdt", 00:21:49.479 "params": { 00:21:49.479 "crdt1": 0, 00:21:49.479 "crdt2": 0, 00:21:49.479 "crdt3": 0 00:21:49.479 } 00:21:49.479 }, 00:21:49.479 { 00:21:49.479 "method": "nvmf_create_transport", 00:21:49.479 "params": { 00:21:49.479 "trtype": "TCP", 00:21:49.479 "max_queue_depth": 128, 00:21:49.479 "max_io_qpairs_per_ctrlr": 127, 00:21:49.479 "in_capsule_data_size": 4096, 00:21:49.479 "max_io_size": 131072, 00:21:49.479 "io_unit_size": 131072, 00:21:49.479 "max_aq_depth": 128, 00:21:49.479 "num_shared_buffers": 511, 00:21:49.479 "buf_cache_size": 4294967295, 00:21:49.479 "dif_insert_or_strip": false, 00:21:49.479 "zcopy": false, 00:21:49.479 "c2h_success": false, 00:21:49.479 "sock_priority": 0, 00:21:49.479 "abort_timeout_sec": 1, 00:21:49.479 "ack_timeout": 0, 00:21:49.479 "data_wr_pool_size": 0 00:21:49.479 } 00:21:49.479 }, 00:21:49.479 { 00:21:49.479 "method": "nvmf_create_subsystem", 00:21:49.479 "params": { 00:21:49.479 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:49.479 "allow_any_host": false, 00:21:49.479 "serial_number": "00000000000000000000", 00:21:49.479 "model_number": "SPDK bdev Controller", 00:21:49.479 "max_namespaces": 32, 00:21:49.479 "min_cntlid": 1, 00:21:49.480 "max_cntlid": 65519, 00:21:49.480 "ana_reporting": false 00:21:49.480 } 00:21:49.480 }, 00:21:49.480 { 00:21:49.480 "method": "nvmf_subsystem_add_host", 00:21:49.480 "params": { 00:21:49.480 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:49.480 "host": "nqn.2016-06.io.spdk:host1", 00:21:49.480 "psk": "key0" 00:21:49.480 } 
00:21:49.480 }, 00:21:49.480 { 00:21:49.480 "method": "nvmf_subsystem_add_ns", 00:21:49.480 "params": { 00:21:49.480 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:49.480 "namespace": { 00:21:49.480 "nsid": 1, 00:21:49.480 "bdev_name": "malloc0", 00:21:49.480 "nguid": "69027D3738C24D358DBEA35DE1076609", 00:21:49.480 "uuid": "69027d37-38c2-4d35-8dbe-a35de1076609", 00:21:49.480 "no_auto_visible": false 00:21:49.480 } 00:21:49.480 } 00:21:49.480 }, 00:21:49.480 { 00:21:49.480 "method": "nvmf_subsystem_add_listener", 00:21:49.480 "params": { 00:21:49.480 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:49.480 "listen_address": { 00:21:49.480 "trtype": "TCP", 00:21:49.480 "adrfam": "IPv4", 00:21:49.480 "traddr": "10.0.0.2", 00:21:49.480 "trsvcid": "4420" 00:21:49.480 }, 00:21:49.480 "secure_channel": true 00:21:49.480 } 00:21:49.480 } 00:21:49.480 ] 00:21:49.480 } 00:21:49.480 ] 00:21:49.480 }' 00:21:49.480 11:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.480 11:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3594021 00:21:49.480 11:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3594021 00:21:49.480 11:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:49.480 11:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3594021 ']' 00:21:49.480 11:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.480 11:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:49.480 11:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.480 11:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:49.480 11:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.480 [2024-07-15 11:33:18.132965] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:21:49.480 [2024-07-15 11:33:18.133018] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.480 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.740 [2024-07-15 11:33:18.197399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.740 [2024-07-15 11:33:18.261731] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.740 [2024-07-15 11:33:18.261768] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.740 [2024-07-15 11:33:18.261775] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.740 [2024-07-15 11:33:18.261782] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.740 [2024-07-15 11:33:18.261788] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:49.740 [2024-07-15 11:33:18.261841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.000 [2024-07-15 11:33:18.458945] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.000 [2024-07-15 11:33:18.490954] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:50.000 [2024-07-15 11:33:18.498440] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:50.261 11:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:50.261 11:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:50.261 11:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:50.261 11:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:50.261 11:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.261 11:33:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.261 11:33:18 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=3594102 00:21:50.261 11:33:18 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 3594102 /var/tmp/bdevperf.sock 00:21:50.261 11:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3594102 ']' 00:21:50.261 11:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:50.261 11:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:50.261 11:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:50.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:50.261 11:33:18 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:50.261 11:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:50.261 11:33:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.261 11:33:18 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:21:50.261 "subsystems": [ 00:21:50.261 { 00:21:50.261 "subsystem": "keyring", 00:21:50.261 "config": [ 00:21:50.261 { 00:21:50.261 "method": "keyring_file_add_key", 00:21:50.261 "params": { 00:21:50.261 "name": "key0", 00:21:50.261 "path": "/tmp/tmp.WeQqYigGT9" 00:21:50.261 } 00:21:50.261 } 00:21:50.261 ] 00:21:50.261 }, 00:21:50.261 { 00:21:50.261 "subsystem": "iobuf", 00:21:50.261 "config": [ 00:21:50.261 { 00:21:50.261 "method": "iobuf_set_options", 00:21:50.261 "params": { 00:21:50.261 "small_pool_count": 8192, 00:21:50.261 "large_pool_count": 1024, 00:21:50.261 "small_bufsize": 8192, 00:21:50.261 "large_bufsize": 135168 00:21:50.261 } 00:21:50.261 } 00:21:50.261 ] 00:21:50.261 }, 00:21:50.261 { 00:21:50.261 "subsystem": "sock", 00:21:50.261 "config": [ 00:21:50.261 { 00:21:50.261 "method": "sock_set_default_impl", 00:21:50.261 "params": { 00:21:50.261 "impl_name": "posix" 00:21:50.261 } 00:21:50.261 }, 00:21:50.261 { 00:21:50.261 "method": "sock_impl_set_options", 00:21:50.261 "params": { 00:21:50.261 "impl_name": "ssl", 00:21:50.261 "recv_buf_size": 4096, 00:21:50.261 "send_buf_size": 4096, 00:21:50.261 "enable_recv_pipe": true, 00:21:50.261 "enable_quickack": false, 00:21:50.261 "enable_placement_id": 0, 00:21:50.261 "enable_zerocopy_send_server": true, 00:21:50.261 "enable_zerocopy_send_client": false, 00:21:50.261 "zerocopy_threshold": 0, 00:21:50.261 "tls_version": 0, 00:21:50.261 "enable_ktls": false 00:21:50.261 } 00:21:50.261 }, 00:21:50.261 { 00:21:50.261 "method": "sock_impl_set_options", 00:21:50.261 "params": { 00:21:50.261 "impl_name": "posix", 00:21:50.261 "recv_buf_size": 2097152, 00:21:50.261 "send_buf_size": 2097152, 00:21:50.261 "enable_recv_pipe": true, 00:21:50.261 "enable_quickack": false, 00:21:50.261 "enable_placement_id": 0, 00:21:50.261 "enable_zerocopy_send_server": true, 00:21:50.261 "enable_zerocopy_send_client": false, 00:21:50.261 "zerocopy_threshold": 0, 00:21:50.261 "tls_version": 0, 00:21:50.261 "enable_ktls": false 00:21:50.261 } 00:21:50.261 } 00:21:50.261 ] 00:21:50.261 }, 00:21:50.261 { 00:21:50.261 "subsystem": "vmd", 00:21:50.261 "config": [] 00:21:50.261 }, 00:21:50.261 { 00:21:50.261 "subsystem": "accel", 00:21:50.261 "config": [ 00:21:50.261 { 00:21:50.261 "method": "accel_set_options", 00:21:50.261 "params": { 00:21:50.261 "small_cache_size": 128, 00:21:50.261 "large_cache_size": 16, 00:21:50.261 "task_count": 2048, 00:21:50.261 "sequence_count": 2048, 00:21:50.261 "buf_count": 2048 00:21:50.261 } 00:21:50.261 } 00:21:50.261 ] 00:21:50.261 }, 00:21:50.261 { 00:21:50.261 "subsystem": "bdev", 00:21:50.261 "config": [ 00:21:50.261 { 00:21:50.261 "method": "bdev_set_options", 00:21:50.261 "params": { 00:21:50.261 "bdev_io_pool_size": 65535, 00:21:50.261 "bdev_io_cache_size": 256, 00:21:50.261 "bdev_auto_examine": true, 00:21:50.261 "iobuf_small_cache_size": 128, 00:21:50.261 "iobuf_large_cache_size": 16 00:21:50.261 } 00:21:50.261 }, 00:21:50.261 { 00:21:50.261 "method": "bdev_raid_set_options", 00:21:50.261 "params": { 00:21:50.261 "process_window_size_kb": 1024 00:21:50.261 } 
00:21:50.261 }, 00:21:50.261 { 00:21:50.261 "method": "bdev_iscsi_set_options", 00:21:50.261 "params": { 00:21:50.261 "timeout_sec": 30 00:21:50.261 } 00:21:50.261 }, 00:21:50.261 { 00:21:50.261 "method": "bdev_nvme_set_options", 00:21:50.261 "params": { 00:21:50.261 "action_on_timeout": "none", 00:21:50.261 "timeout_us": 0, 00:21:50.261 "timeout_admin_us": 0, 00:21:50.261 "keep_alive_timeout_ms": 10000, 00:21:50.261 "arbitration_burst": 0, 00:21:50.261 "low_priority_weight": 0, 00:21:50.261 "medium_priority_weight": 0, 00:21:50.261 "high_priority_weight": 0, 00:21:50.261 "nvme_adminq_poll_period_us": 10000, 00:21:50.261 "nvme_ioq_poll_period_us": 0, 00:21:50.261 "io_queue_requests": 512, 00:21:50.261 "delay_cmd_submit": true, 00:21:50.261 "transport_retry_count": 4, 00:21:50.261 "bdev_retry_count": 3, 00:21:50.261 "transport_ack_timeout": 0, 00:21:50.261 "ctrlr_loss_timeout_sec": 0, 00:21:50.261 "reconnect_delay_sec": 0, 00:21:50.261 "fast_io_fail_timeout_sec": 0, 00:21:50.261 "disable_auto_failback": false, 00:21:50.261 "generate_uuids": false, 00:21:50.261 "transport_tos": 0, 00:21:50.261 "nvme_error_stat": false, 00:21:50.261 "rdma_srq_size": 0, 00:21:50.261 "io_path_stat": false, 00:21:50.261 "allow_accel_sequence": false, 00:21:50.261 "rdma_max_cq_size": 0, 00:21:50.261 "rdma_cm_event_timeout_ms": 0, 00:21:50.261 "dhchap_digests": [ 00:21:50.261 "sha256", 00:21:50.261 "sha384", 00:21:50.261 "sha512" 00:21:50.261 ], 00:21:50.261 "dhchap_dhgroups": [ 00:21:50.261 "null", 00:21:50.261 "ffdhe2048", 00:21:50.261 "ffdhe3072", 00:21:50.261 "ffdhe4096", 00:21:50.261 "ffdhe6144", 00:21:50.261 "ffdhe8192" 00:21:50.261 ] 00:21:50.261 } 00:21:50.261 }, 00:21:50.261 { 00:21:50.261 "method": "bdev_nvme_attach_controller", 00:21:50.261 "params": { 00:21:50.261 "name": "nvme0", 00:21:50.261 "trtype": "TCP", 00:21:50.261 "adrfam": "IPv4", 00:21:50.261 "traddr": "10.0.0.2", 00:21:50.261 "trsvcid": "4420", 00:21:50.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.261 "prchk_reftag": false, 00:21:50.261 "prchk_guard": false, 00:21:50.261 "ctrlr_loss_timeout_sec": 0, 00:21:50.261 "reconnect_delay_sec": 0, 00:21:50.261 "fast_io_fail_timeout_sec": 0, 00:21:50.261 "psk": "key0", 00:21:50.261 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:50.261 "hdgst": false, 00:21:50.261 "ddgst": false 00:21:50.261 } 00:21:50.261 }, 00:21:50.261 { 00:21:50.261 "method": "bdev_nvme_set_hotplug", 00:21:50.261 "params": { 00:21:50.261 "period_us": 100000, 00:21:50.261 "enable": false 00:21:50.261 } 00:21:50.261 }, 00:21:50.261 { 00:21:50.261 "method": "bdev_enable_histogram", 00:21:50.261 "params": { 00:21:50.261 "name": "nvme0n1", 00:21:50.261 "enable": true 00:21:50.261 } 00:21:50.261 }, 00:21:50.261 { 00:21:50.261 "method": "bdev_wait_for_examine" 00:21:50.261 } 00:21:50.261 ] 00:21:50.261 }, 00:21:50.261 { 00:21:50.261 "subsystem": "nbd", 00:21:50.261 "config": [] 00:21:50.261 } 00:21:50.261 ] 00:21:50.261 }' 00:21:50.522 [2024-07-15 11:33:18.985659] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
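Rather than reconfiguring a fresh target and initiator by hand, this phase replays the two JSON blobs captured earlier: the trace shows nvmf_tgt taking -c /dev/fd/62 and bdevperf taking -c /dev/fd/63, both fed by process substitution. A minimal sketch of that pattern, assuming $tgtcfg and $bperfcfg hold the configs saved above:

    # restart the target straight from the saved JSON, with no config file on disk
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
    # bdevperf: -z waits for RPC-driven tests, -c supplies the keyring + attach config the same way
    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &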
00:21:50.522 [2024-07-15 11:33:18.985708] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3594102 ] 00:21:50.522 EAL: No free 2048 kB hugepages reported on node 1 00:21:50.522 [2024-07-15 11:33:19.059737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.522 [2024-07-15 11:33:19.113363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.782 [2024-07-15 11:33:19.246937] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:51.353 11:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:51.353 11:33:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:51.353 11:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:51.353 11:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:21:51.353 11:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.353 11:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:51.353 Running I/O for 1 seconds... 00:21:52.738 00:21:52.738 Latency(us) 00:21:52.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.738 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:52.738 Verification LBA range: start 0x0 length 0x2000 00:21:52.738 nvme0n1 : 1.05 2759.58 10.78 0.00 0.00 45472.62 4751.36 115343.36 00:21:52.738 =================================================================================================================== 00:21:52.738 Total : 2759.58 10.78 0.00 0.00 45472.62 4751.36 115343.36 00:21:52.738 0 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:52.738 nvmf_trace.0 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3594102 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3594102 ']' 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 
-- # kill -0 3594102 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3594102 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3594102' 00:21:52.738 killing process with pid 3594102 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3594102 00:21:52.738 Received shutdown signal, test time was about 1.000000 seconds 00:21:52.738 00:21:52.738 Latency(us) 00:21:52.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.738 =================================================================================================================== 00:21:52.738 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3594102 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:52.738 rmmod nvme_tcp 00:21:52.738 rmmod nvme_fabrics 00:21:52.738 rmmod nvme_keyring 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3594021 ']' 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3594021 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3594021 ']' 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3594021 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:52.738 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3594021 00:21:52.999 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:52.999 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:52.999 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3594021' 00:21:52.999 killing process with pid 3594021 00:21:52.999 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3594021 00:21:52.999 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3594021 00:21:52.999 11:33:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:52.999 11:33:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:52.999 11:33:21 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:52.999 11:33:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:52.999 11:33:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:52.999 11:33:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.999 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:52.999 11:33:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.577 11:33:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:55.577 11:33:23 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.6YjmDvFiWj /tmp/tmp.9Tug8djNuY /tmp/tmp.WeQqYigGT9 00:21:55.577 00:21:55.577 real 1m23.769s 00:21:55.577 user 2m2.823s 00:21:55.577 sys 0m29.314s 00:21:55.577 11:33:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:55.577 11:33:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:55.577 ************************************ 00:21:55.577 END TEST nvmf_tls 00:21:55.577 ************************************ 00:21:55.577 11:33:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:55.577 11:33:23 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:55.577 11:33:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:55.577 11:33:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:55.577 11:33:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:55.577 ************************************ 00:21:55.577 START TEST nvmf_fips 00:21:55.577 ************************************ 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:55.577 * Looking for test storage... 
00:21:55.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.577 11:33:23 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:55.577 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:55.578 11:33:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:21:55.578 Error setting digest 00:21:55.578 0022500B477F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:55.578 0022500B477F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:55.578 11:33:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:02.230 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.230 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:22:02.230 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:02.230 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:02.230 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:02.230 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:02.231 
11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:02.231 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:02.231 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:02.231 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:02.231 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:02.231 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:02.492 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:02.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:02.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.558 ms 00:22:02.492 00:22:02.492 --- 10.0.0.2 ping statistics --- 00:22:02.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.492 rtt min/avg/max/mdev = 0.558/0.558/0.558/0.000 ms 00:22:02.492 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:02.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:02.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:22:02.492 00:22:02.492 --- 10.0.0.1 ping statistics --- 00:22:02.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.492 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:22:02.492 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.492 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:22:02.492 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:02.492 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.492 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:02.492 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:02.492 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.492 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:02.492 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:02.492 11:33:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:02.492 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:02.492 11:33:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:02.492 11:33:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:02.492 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3598746 00:22:02.492 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3598746 00:22:02.492 11:33:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3598746 ']' 00:22:02.492 11:33:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.492 11:33:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:02.492 11:33:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.492 11:33:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:02.493 11:33:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:02.493 11:33:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:02.493 [2024-07-15 11:33:31.076026] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:22:02.493 [2024-07-15 11:33:31.076097] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.493 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.493 [2024-07-15 11:33:31.164607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.754 [2024-07-15 11:33:31.259321] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.754 [2024-07-15 11:33:31.259370] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:02.754 [2024-07-15 11:33:31.259378] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.754 [2024-07-15 11:33:31.259385] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.754 [2024-07-15 11:33:31.259392] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:02.754 [2024-07-15 11:33:31.259425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.328 11:33:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:03.328 11:33:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:22:03.328 11:33:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:03.328 11:33:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:03.328 11:33:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:03.328 11:33:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.328 11:33:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:03.328 11:33:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:03.328 11:33:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:03.328 11:33:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:03.328 11:33:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:03.328 11:33:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:03.328 11:33:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:03.328 11:33:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:03.590 [2024-07-15 11:33:32.032336] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.590 [2024-07-15 11:33:32.048331] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:03.590 [2024-07-15 11:33:32.048605] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.590 [2024-07-15 11:33:32.078391] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:03.590 malloc0 00:22:03.590 11:33:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:03.590 11:33:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3599095 00:22:03.590 11:33:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3599095 /var/tmp/bdevperf.sock 00:22:03.590 11:33:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3599095 ']' 00:22:03.590 11:33:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:03.590 11:33:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:03.590 11:33:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:03.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:03.590 11:33:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:03.590 11:33:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:03.590 11:33:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:03.590 [2024-07-15 11:33:32.163749] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:22:03.590 [2024-07-15 11:33:32.163823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3599095 ] 00:22:03.590 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.590 [2024-07-15 11:33:32.218148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.590 [2024-07-15 11:33:32.282559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:04.532 11:33:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:04.532 11:33:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:22:04.532 11:33:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:04.532 [2024-07-15 11:33:33.086324] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:04.532 [2024-07-15 11:33:33.086386] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:04.532 TLSTESTn1 00:22:04.532 11:33:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:04.791 Running I/O for 10 seconds... 
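The 10-second verify run above is driven entirely over the bdevperf JSON-RPC socket: the harness attaches a TLS-protected NVMe/TCP controller with the pre-shared key and then triggers the configured workload, producing the latency table below. A condensed sketch of that sequence, assuming an SPDK checkout at $SPDK_DIR and the PSK already written to key.txt; the paths stand in for the full Jenkins workspace paths shown in the trace, and backgrounding bdevperf plus the socket wait are simplifications of the real harness:

    # Hedged sketch of the bdevperf-driven TLS verify run, condensed from the trace above.
    SPDK_DIR=/path/to/spdk          # assumption: stands in for the Jenkins workspace path
    SOCK=/var/tmp/bdevperf.sock

    # Start bdevperf in RPC-wait mode (-z) with the verify workload parameters.
    "$SPDK_DIR/build/examples/bdevperf" -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 &
    until [[ -S "$SOCK" ]]; do sleep 0.1; done   # the real harness uses waitforlisten

    # Attach a controller to the TLS-enabled listener, presenting the PSK.
    "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk "$SPDK_DIR/test/nvmf/fips/key.txt"

    # Run the configured I/O; bdevperf prints the per-bdev latency table seen below.
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests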
00:22:14.788 00:22:14.788 Latency(us) 00:22:14.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.788 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:14.788 Verification LBA range: start 0x0 length 0x2000 00:22:14.788 TLSTESTn1 : 10.06 2716.85 10.61 0.00 0.00 46962.66 4833.28 62040.75 00:22:14.789 =================================================================================================================== 00:22:14.789 Total : 2716.85 10.61 0.00 0.00 46962.66 4833.28 62040.75 00:22:14.789 0 00:22:14.789 11:33:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:14.789 11:33:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:14.789 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:22:14.789 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:22:14.789 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:22:14.789 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:14.789 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:22:14.789 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:22:14.789 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:22:14.789 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:14.789 nvmf_trace.0 00:22:14.789 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:22:14.789 11:33:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3599095 00:22:14.789 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3599095 ']' 00:22:14.789 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3599095 00:22:14.789 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:22:14.789 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:14.789 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3599095 00:22:15.049 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:15.049 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:15.049 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3599095' 00:22:15.049 killing process with pid 3599095 00:22:15.049 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3599095 00:22:15.049 Received shutdown signal, test time was about 10.000000 seconds 00:22:15.049 00:22:15.049 Latency(us) 00:22:15.049 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.049 =================================================================================================================== 00:22:15.049 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:15.049 [2024-07-15 11:33:43.529709] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:15.049 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3599095 00:22:15.049 11:33:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:15.049 11:33:43 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:22:15.049 11:33:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:15.049 11:33:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:15.049 11:33:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:15.049 11:33:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:15.049 11:33:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:15.049 rmmod nvme_tcp 00:22:15.049 rmmod nvme_fabrics 00:22:15.049 rmmod nvme_keyring 00:22:15.049 11:33:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:15.049 11:33:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:15.049 11:33:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:15.049 11:33:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3598746 ']' 00:22:15.049 11:33:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3598746 00:22:15.049 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3598746 ']' 00:22:15.049 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3598746 00:22:15.049 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:22:15.049 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:15.049 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3598746 00:22:15.311 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:15.311 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:15.311 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3598746' 00:22:15.311 killing process with pid 3598746 00:22:15.311 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3598746 00:22:15.311 [2024-07-15 11:33:43.763080] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:15.311 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3598746 00:22:15.311 11:33:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:15.311 11:33:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:15.311 11:33:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:15.311 11:33:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:15.311 11:33:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:15.311 11:33:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.311 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:15.311 11:33:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.858 11:33:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:17.858 11:33:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:17.858 00:22:17.858 real 0m22.185s 00:22:17.858 user 0m23.097s 00:22:17.858 sys 0m9.771s 00:22:17.858 11:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:17.858 11:33:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:17.858 ************************************ 00:22:17.858 END TEST nvmf_fips 
00:22:17.858 ************************************ 00:22:17.858 11:33:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:17.858 11:33:46 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:22:17.858 11:33:46 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:22:17.858 11:33:46 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:22:17.858 11:33:46 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:22:17.858 11:33:46 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:22:17.858 11:33:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:24.449 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:24.449 11:33:52 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:24.449 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:24.449 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:24.449 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:22:24.449 11:33:52 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:24.449 11:33:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:24.449 11:33:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:22:24.449 11:33:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:24.449 ************************************ 00:22:24.449 START TEST nvmf_perf_adq 00:22:24.449 ************************************ 00:22:24.449 11:33:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:24.449 * Looking for test storage... 00:22:24.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:24.449 11:33:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:24.449 11:33:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:24.449 11:33:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:24.449 11:33:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:24.449 11:33:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:24.449 11:33:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:24.449 11:33:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:24.449 11:33:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:24.449 11:33:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:24.449 11:33:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:24.449 11:33:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:24.450 11:33:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:24.450 11:33:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:24.450 11:33:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:24.450 11:33:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:24.450 11:33:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:24.450 11:33:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:24.450 11:33:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:24.450 11:33:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:24.450 11:33:52 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:24.450 11:33:52 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:24.450 11:33:52 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:24.450 11:33:52 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.450 11:33:52 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.450 11:33:52 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.450 11:33:52 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:24.450 11:33:52 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.450 11:33:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:22:24.450 11:33:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:24.450 11:33:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:24.450 11:33:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:24.450 11:33:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:24.450 11:33:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:24.450 11:33:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:24.450 11:33:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:24.450 11:33:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:24.450 11:33:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:24.450 11:33:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:24.450 11:33:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:31.090 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:31.090 Found 0000:4b:00.1 (0x8086 - 0x159b) 
00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:31.090 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:31.090 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:31.090 11:33:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:33.005 11:34:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:34.918 11:34:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:40.208 11:34:08 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:40.208 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:40.208 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.208 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:40.209 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:40.209 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.209 11:34:08 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:40.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:40.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:22:40.209 00:22:40.209 --- 10.0.0.2 ping statistics --- 00:22:40.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.209 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:40.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.448 ms 00:22:40.209 00:22:40.209 --- 10.0.0.1 ping statistics --- 00:22:40.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.209 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3610814 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3610814 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3610814 ']' 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:40.209 11:34:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.209 [2024-07-15 11:34:08.701381] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
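The two pings above are the sanity check on the namespace split that nvmf_tcp_init just built: the first E810 port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace and will host the target, while the second port (cvl_0_1, 10.0.0.1) stays in the root namespace for the initiator. Condensed from the xtrace lines above, with the same commands, names and addresses:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP on 4420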
00:22:40.209 [2024-07-15 11:34:08.701446] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.209 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.209 [2024-07-15 11:34:08.774906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:40.209 [2024-07-15 11:34:08.851715] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.209 [2024-07-15 11:34:08.851757] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.209 [2024-07-15 11:34:08.851766] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.209 [2024-07-15 11:34:08.851772] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.209 [2024-07-15 11:34:08.851778] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:40.209 [2024-07-15 11:34:08.851922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.209 [2024-07-15 11:34:08.852043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.209 [2024-07-15 11:34:08.852413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:40.209 [2024-07-15 11:34:08.852504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.151 [2024-07-15 11:34:09.658455] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.151 Malloc1 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.151 [2024-07-15 11:34:09.717829] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3611014 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:41.151 11:34:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:41.151 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.064 11:34:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:43.064 11:34:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.064 11:34:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:43.064 11:34:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.064 11:34:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:43.064 
"tick_rate": 2400000000, 00:22:43.064 "poll_groups": [ 00:22:43.064 { 00:22:43.064 "name": "nvmf_tgt_poll_group_000", 00:22:43.064 "admin_qpairs": 1, 00:22:43.064 "io_qpairs": 1, 00:22:43.064 "current_admin_qpairs": 1, 00:22:43.064 "current_io_qpairs": 1, 00:22:43.064 "pending_bdev_io": 0, 00:22:43.064 "completed_nvme_io": 19928, 00:22:43.064 "transports": [ 00:22:43.064 { 00:22:43.064 "trtype": "TCP" 00:22:43.064 } 00:22:43.064 ] 00:22:43.064 }, 00:22:43.064 { 00:22:43.064 "name": "nvmf_tgt_poll_group_001", 00:22:43.064 "admin_qpairs": 0, 00:22:43.064 "io_qpairs": 1, 00:22:43.064 "current_admin_qpairs": 0, 00:22:43.064 "current_io_qpairs": 1, 00:22:43.064 "pending_bdev_io": 0, 00:22:43.064 "completed_nvme_io": 28491, 00:22:43.064 "transports": [ 00:22:43.064 { 00:22:43.064 "trtype": "TCP" 00:22:43.064 } 00:22:43.064 ] 00:22:43.064 }, 00:22:43.064 { 00:22:43.064 "name": "nvmf_tgt_poll_group_002", 00:22:43.064 "admin_qpairs": 0, 00:22:43.064 "io_qpairs": 1, 00:22:43.064 "current_admin_qpairs": 0, 00:22:43.064 "current_io_qpairs": 1, 00:22:43.064 "pending_bdev_io": 0, 00:22:43.064 "completed_nvme_io": 22705, 00:22:43.064 "transports": [ 00:22:43.064 { 00:22:43.064 "trtype": "TCP" 00:22:43.064 } 00:22:43.064 ] 00:22:43.064 }, 00:22:43.064 { 00:22:43.064 "name": "nvmf_tgt_poll_group_003", 00:22:43.064 "admin_qpairs": 0, 00:22:43.064 "io_qpairs": 1, 00:22:43.064 "current_admin_qpairs": 0, 00:22:43.064 "current_io_qpairs": 1, 00:22:43.064 "pending_bdev_io": 0, 00:22:43.064 "completed_nvme_io": 19942, 00:22:43.064 "transports": [ 00:22:43.064 { 00:22:43.064 "trtype": "TCP" 00:22:43.064 } 00:22:43.064 ] 00:22:43.064 } 00:22:43.064 ] 00:22:43.064 }' 00:22:43.064 11:34:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:43.064 11:34:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:43.325 11:34:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:43.325 11:34:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:43.325 11:34:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3611014 00:22:51.461 Initializing NVMe Controllers 00:22:51.461 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:51.461 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:51.461 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:51.461 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:51.461 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:51.461 Initialization complete. Launching workers. 
00:22:51.461 ======================================================== 00:22:51.461 Latency(us) 00:22:51.461 Device Information : IOPS MiB/s Average min max 00:22:51.461 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13146.02 51.35 4868.88 1216.34 9352.28 00:22:51.461 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14917.51 58.27 4289.69 1343.42 9587.88 00:22:51.461 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14269.31 55.74 4484.70 1284.43 10667.56 00:22:51.461 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11162.83 43.60 5732.48 1145.53 10766.99 00:22:51.461 ======================================================== 00:22:51.461 Total : 53495.67 208.97 4785.10 1145.53 10766.99 00:22:51.461 00:22:51.461 11:34:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:51.461 11:34:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:51.461 11:34:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:51.461 11:34:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:51.461 11:34:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:51.461 11:34:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:51.461 11:34:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:51.461 rmmod nvme_tcp 00:22:51.461 rmmod nvme_fabrics 00:22:51.461 rmmod nvme_keyring 00:22:51.461 11:34:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:51.461 11:34:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:51.461 11:34:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:51.461 11:34:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3610814 ']' 00:22:51.461 11:34:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3610814 00:22:51.461 11:34:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3610814 ']' 00:22:51.461 11:34:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3610814 00:22:51.461 11:34:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:51.461 11:34:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:51.461 11:34:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3610814 00:22:51.461 11:34:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:51.461 11:34:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:51.461 11:34:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3610814' 00:22:51.461 killing process with pid 3610814 00:22:51.461 11:34:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3610814 00:22:51.461 11:34:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3610814 00:22:51.722 11:34:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:51.722 11:34:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:51.722 11:34:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:51.722 11:34:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:51.722 11:34:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:51.722 11:34:20 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.722 11:34:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:51.722 11:34:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.633 11:34:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:53.633 11:34:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:53.633 11:34:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:55.547 11:34:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:57.022 11:34:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:02.321 11:34:30 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:02.321 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:02.322 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:02.322 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
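As in the first pass, each matched PCI address is mapped to its kernel net device with a plain sysfs glob and only interfaces reported as up are kept. A minimal stand-alone version of that lookup, assuming the first port reported above:

    pci=0000:4b:00.0                                   # first E810 port from the log
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
        dev=${net##*/}
        echo "$pci -> $dev ($(cat "$net/operstate"))"  # e.g. cvl_0_0 (up)
    done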
00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:02.322 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:02.322 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:02.322 
11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:02.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:02.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:23:02.322 00:23:02.322 --- 10.0.0.2 ping statistics --- 00:23:02.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.322 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:23:02.322 11:34:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:02.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:02.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:23:02.322 00:23:02.322 --- 10.0.0.1 ping statistics --- 00:23:02.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.322 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:23:02.322 11:34:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:02.322 11:34:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:23:02.322 11:34:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:02.322 11:34:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:02.322 11:34:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:02.322 11:34:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:02.322 11:34:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:02.322 11:34:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:02.322 11:34:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:02.584 11:34:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:23:02.584 11:34:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:02.584 11:34:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:02.584 11:34:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:02.584 net.core.busy_poll = 1 00:23:02.584 11:34:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:02.584 net.core.busy_read = 1 00:23:02.584 11:34:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:02.584 11:34:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:02.584 11:34:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:23:02.584 11:34:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:02.585 11:34:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:02.846 11:34:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:02.846 11:34:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:02.846 11:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:02.846 11:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.846 11:34:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3615686 00:23:02.846 11:34:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3615686 00:23:02.846 11:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3615686 ']' 00:23:02.846 11:34:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:02.846 11:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.846 11:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:02.846 11:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.846 11:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:02.846 11:34:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.846 [2024-07-15 11:34:31.401155] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:23:02.846 [2024-07-15 11:34:31.401220] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.846 EAL: No free 2048 kB hugepages reported on node 1 00:23:02.846 [2024-07-15 11:34:31.475080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:03.108 [2024-07-15 11:34:31.554031] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:03.108 [2024-07-15 11:34:31.554069] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:03.108 [2024-07-15 11:34:31.554077] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:03.108 [2024-07-15 11:34:31.554083] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:03.108 [2024-07-15 11:34:31.554088] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
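This second target start is the ADQ-enabled run: adq_configure_driver pins NVMe/TCP traffic destined for 10.0.0.2:4420 onto a dedicated hardware traffic class before nvmf_tgt comes up. Collected from the scattered xtrace lines above into one place (commands and values exactly as logged; the ethtool and tc invocations are executed via ip netns exec cvl_0_0_ns_spdk in the log, the netns prefix is dropped here for readability):

    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # two traffic classes: TC0 on queues 0-1, TC1 on queues 2-3, offloaded in channel mode
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    # steer NVMe/TCP (dst 10.0.0.2, TCP port 4420) into hardware TC 1
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    # finally scripts/perf/nvmf/set_xps_rxqs cvl_0_0 aligns XPS with the queues above

The matching application-side half is visible a few lines below: sock_impl_set_options is called with --enable-placement-id 1 and nvmf_create_transport with --sock-priority 1, versus 0 for both in the baseline run.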
00:23:03.108 [2024-07-15 11:34:31.554261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.108 [2024-07-15 11:34:31.554448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:03.108 [2024-07-15 11:34:31.554597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.108 [2024-07-15 11:34:31.554598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:03.680 11:34:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:03.680 11:34:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:23:03.680 11:34:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:03.680 11:34:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:03.680 11:34:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.680 11:34:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:03.680 11:34:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:23:03.680 11:34:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:03.680 11:34:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:03.680 11:34:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.680 11:34:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.680 11:34:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.680 11:34:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:03.680 11:34:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:03.680 11:34:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.680 11:34:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.680 11:34:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.680 11:34:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:03.680 11:34:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.680 11:34:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.680 11:34:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.680 11:34:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:03.680 11:34:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.680 11:34:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.680 [2024-07-15 11:34:32.353448] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.680 11:34:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.680 11:34:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:03.680 11:34:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.680 11:34:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.680 Malloc1 00:23:03.941 11:34:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.941 11:34:32 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:03.941 11:34:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.941 11:34:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.941 11:34:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.941 11:34:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:03.941 11:34:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.941 11:34:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.941 11:34:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.941 11:34:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:03.941 11:34:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.941 11:34:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.941 [2024-07-15 11:34:32.412855] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:03.941 11:34:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.941 11:34:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3615842 00:23:03.941 11:34:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:23:03.941 11:34:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:03.941 EAL: No free 2048 kB hugepages reported on node 1 00:23:05.857 11:34:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:23:05.857 11:34:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.857 11:34:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:05.857 11:34:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.857 11:34:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:23:05.857 "tick_rate": 2400000000, 00:23:05.857 "poll_groups": [ 00:23:05.857 { 00:23:05.857 "name": "nvmf_tgt_poll_group_000", 00:23:05.857 "admin_qpairs": 1, 00:23:05.857 "io_qpairs": 2, 00:23:05.857 "current_admin_qpairs": 1, 00:23:05.857 "current_io_qpairs": 2, 00:23:05.857 "pending_bdev_io": 0, 00:23:05.857 "completed_nvme_io": 30247, 00:23:05.857 "transports": [ 00:23:05.857 { 00:23:05.857 "trtype": "TCP" 00:23:05.857 } 00:23:05.857 ] 00:23:05.857 }, 00:23:05.857 { 00:23:05.857 "name": "nvmf_tgt_poll_group_001", 00:23:05.857 "admin_qpairs": 0, 00:23:05.857 "io_qpairs": 2, 00:23:05.857 "current_admin_qpairs": 0, 00:23:05.857 "current_io_qpairs": 2, 00:23:05.857 "pending_bdev_io": 0, 00:23:05.857 "completed_nvme_io": 42198, 00:23:05.857 "transports": [ 00:23:05.857 { 00:23:05.857 "trtype": "TCP" 00:23:05.857 } 00:23:05.857 ] 00:23:05.857 }, 00:23:05.857 { 00:23:05.857 "name": "nvmf_tgt_poll_group_002", 00:23:05.857 "admin_qpairs": 0, 00:23:05.857 "io_qpairs": 0, 00:23:05.857 "current_admin_qpairs": 0, 00:23:05.857 "current_io_qpairs": 0, 00:23:05.857 "pending_bdev_io": 0, 00:23:05.857 "completed_nvme_io": 0, 
00:23:05.857 "transports": [ 00:23:05.857 { 00:23:05.857 "trtype": "TCP" 00:23:05.857 } 00:23:05.857 ] 00:23:05.857 }, 00:23:05.857 { 00:23:05.857 "name": "nvmf_tgt_poll_group_003", 00:23:05.857 "admin_qpairs": 0, 00:23:05.857 "io_qpairs": 0, 00:23:05.857 "current_admin_qpairs": 0, 00:23:05.857 "current_io_qpairs": 0, 00:23:05.857 "pending_bdev_io": 0, 00:23:05.857 "completed_nvme_io": 0, 00:23:05.857 "transports": [ 00:23:05.857 { 00:23:05.857 "trtype": "TCP" 00:23:05.857 } 00:23:05.857 ] 00:23:05.857 } 00:23:05.857 ] 00:23:05.857 }' 00:23:05.857 11:34:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:05.857 11:34:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:23:05.857 11:34:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:23:05.857 11:34:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:23:05.857 11:34:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3615842 00:23:13.998 Initializing NVMe Controllers 00:23:13.998 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:13.998 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:13.998 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:13.998 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:13.998 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:13.998 Initialization complete. Launching workers. 00:23:13.998 ======================================================== 00:23:13.998 Latency(us) 00:23:13.998 Device Information : IOPS MiB/s Average min max 00:23:13.998 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9062.80 35.40 7062.44 1130.70 50843.38 00:23:13.998 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9908.90 38.71 6458.88 1344.61 49709.56 00:23:13.998 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11110.30 43.40 5777.76 1216.37 48848.88 00:23:13.998 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11238.10 43.90 5694.49 1098.95 51642.50 00:23:13.998 ======================================================== 00:23:13.998 Total : 41320.10 161.41 6200.22 1098.95 51642.50 00:23:13.998 00:23:13.998 11:34:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:23:13.998 11:34:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:13.998 11:34:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:13.998 11:34:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:13.998 11:34:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:13.998 11:34:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:13.998 11:34:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:13.998 rmmod nvme_tcp 00:23:13.998 rmmod nvme_fabrics 00:23:13.998 rmmod nvme_keyring 00:23:13.998 11:34:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:13.998 11:34:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:13.998 11:34:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:13.998 11:34:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3615686 ']' 00:23:13.998 11:34:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 3615686 00:23:13.998 11:34:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3615686 ']' 00:23:13.998 11:34:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3615686 00:23:13.998 11:34:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:23:13.998 11:34:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:13.998 11:34:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3615686 00:23:13.998 11:34:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:13.998 11:34:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:13.998 11:34:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3615686' 00:23:13.998 killing process with pid 3615686 00:23:13.998 11:34:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3615686 00:23:13.998 11:34:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3615686 00:23:14.259 11:34:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:14.259 11:34:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:14.259 11:34:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:14.259 11:34:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:14.259 11:34:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:14.259 11:34:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.259 11:34:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:14.259 11:34:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.556 11:34:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:17.556 11:34:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:23:17.556 00:23:17.556 real 0m53.240s 00:23:17.556 user 2m46.743s 00:23:17.556 sys 0m11.919s 00:23:17.556 11:34:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:17.556 11:34:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:17.556 ************************************ 00:23:17.556 END TEST nvmf_perf_adq 00:23:17.556 ************************************ 00:23:17.556 11:34:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:17.556 11:34:45 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:17.556 11:34:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:17.556 11:34:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:17.556 11:34:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:17.556 ************************************ 00:23:17.556 START TEST nvmf_shutdown 00:23:17.556 ************************************ 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:17.556 * Looking for test storage... 
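Both nvmftestfini calls in this log (after the baseline run and after the ADQ run) tear the setup down the same way before the ice driver is reloaded or the next test starts. Condensed from the xtrace above; the final namespace removal happens inside _remove_spdk_ns, whose output is redirected away, so that step is an assumption rather than something visible in the log:

    modprobe -v -r nvme-tcp        # the log shows nvme_tcp, nvme_fabrics and nvme_keyring unloading
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                # 3610814 for the first target, 3615686 for the second
    ip -4 addr flush cvl_0_1
    ip netns del cvl_0_0_ns_spdk   # assumed effect of _remove_spdk_ns, not shown in the log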
00:23:17.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:17.556 11:34:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:17.557 11:34:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:17.557 ************************************ 00:23:17.557 START TEST nvmf_shutdown_tc1 00:23:17.557 ************************************ 00:23:17.557 11:34:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:23:17.557 11:34:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:23:17.557 11:34:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:17.557 11:34:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:17.557 11:34:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:17.557 11:34:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:17.557 11:34:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:17.557 11:34:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:17.557 11:34:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.557 11:34:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:17.557 11:34:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.557 11:34:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:17.557 11:34:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:17.557 11:34:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:17.557 11:34:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.700 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:25.700 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:25.700 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:25.700 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:25.700 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:25.700 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:25.700 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:25.700 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:23:25.700 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:25.700 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:23:25.700 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:23:25.700 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:23:25.700 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:23:25.700 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:23:25.700 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:25.700 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.700 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.700 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:25.701 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:25.701 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:25.701 11:34:52 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:25.701 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:25.701 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:25.701 11:34:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:25.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.530 ms 00:23:25.701 00:23:25.701 --- 10.0.0.2 ping statistics --- 00:23:25.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.701 rtt min/avg/max/mdev = 0.530/0.530/0.530/0.000 ms 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:25.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:23:25.701 00:23:25.701 --- 10.0.0.1 ping statistics --- 00:23:25.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.701 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3622295 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3622295 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3622295 ']' 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:25.701 11:34:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.701 [2024-07-15 11:34:53.334242] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:23:25.701 [2024-07-15 11:34:53.334344] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.701 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.702 [2024-07-15 11:34:53.428670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:25.702 [2024-07-15 11:34:53.523673] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.702 [2024-07-15 11:34:53.523731] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.702 [2024-07-15 11:34:53.523739] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.702 [2024-07-15 11:34:53.523746] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.702 [2024-07-15 11:34:53.523752] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:25.702 [2024-07-15 11:34:53.523894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.702 [2024-07-15 11:34:53.524062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:25.702 [2024-07-15 11:34:53.524228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.702 [2024-07-15 11:34:53.524228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.702 [2024-07-15 11:34:54.163622] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:25.702 11:34:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.702 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.702 Malloc1 00:23:25.702 [2024-07-15 11:34:54.267085] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.702 Malloc2 00:23:25.702 Malloc3 00:23:25.702 Malloc4 00:23:25.702 Malloc5 00:23:25.964 Malloc6 00:23:25.964 Malloc7 00:23:25.964 Malloc8 00:23:25.964 Malloc9 00:23:25.964 Malloc10 00:23:25.964 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.964 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:25.964 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:25.964 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:26.288 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3622680 00:23:26.288 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3622680 
/var/tmp/bdevperf.sock 00:23:26.288 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3622680 ']' 00:23:26.288 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:26.288 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:26.288 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:26.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:26.288 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:26.288 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:26.288 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:26.288 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:26.288 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:26.288 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:26.288 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.288 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.288 { 00:23:26.288 "params": { 00:23:26.288 "name": "Nvme$subsystem", 00:23:26.288 "trtype": "$TEST_TRANSPORT", 00:23:26.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.288 "adrfam": "ipv4", 00:23:26.288 "trsvcid": "$NVMF_PORT", 00:23:26.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.288 "hdgst": ${hdgst:-false}, 00:23:26.288 "ddgst": ${ddgst:-false} 00:23:26.288 }, 00:23:26.288 "method": "bdev_nvme_attach_controller" 00:23:26.288 } 00:23:26.288 EOF 00:23:26.288 )") 00:23:26.288 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:26.288 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.288 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.288 { 00:23:26.288 "params": { 00:23:26.288 "name": "Nvme$subsystem", 00:23:26.288 "trtype": "$TEST_TRANSPORT", 00:23:26.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.288 "adrfam": "ipv4", 00:23:26.288 "trsvcid": "$NVMF_PORT", 00:23:26.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.288 "hdgst": ${hdgst:-false}, 00:23:26.288 "ddgst": ${ddgst:-false} 00:23:26.288 }, 00:23:26.288 "method": "bdev_nvme_attach_controller" 00:23:26.288 } 00:23:26.288 EOF 00:23:26.288 )") 00:23:26.288 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:26.288 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.288 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.288 { 00:23:26.288 "params": { 00:23:26.288 
"name": "Nvme$subsystem", 00:23:26.288 "trtype": "$TEST_TRANSPORT", 00:23:26.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.288 "adrfam": "ipv4", 00:23:26.288 "trsvcid": "$NVMF_PORT", 00:23:26.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.288 "hdgst": ${hdgst:-false}, 00:23:26.288 "ddgst": ${ddgst:-false} 00:23:26.288 }, 00:23:26.288 "method": "bdev_nvme_attach_controller" 00:23:26.288 } 00:23:26.288 EOF 00:23:26.288 )") 00:23:26.288 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:26.288 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.288 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.288 { 00:23:26.288 "params": { 00:23:26.288 "name": "Nvme$subsystem", 00:23:26.288 "trtype": "$TEST_TRANSPORT", 00:23:26.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.288 "adrfam": "ipv4", 00:23:26.288 "trsvcid": "$NVMF_PORT", 00:23:26.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.288 "hdgst": ${hdgst:-false}, 00:23:26.288 "ddgst": ${ddgst:-false} 00:23:26.288 }, 00:23:26.288 "method": "bdev_nvme_attach_controller" 00:23:26.288 } 00:23:26.288 EOF 00:23:26.288 )") 00:23:26.288 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:26.288 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.288 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.288 { 00:23:26.288 "params": { 00:23:26.288 "name": "Nvme$subsystem", 00:23:26.288 "trtype": "$TEST_TRANSPORT", 00:23:26.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.288 "adrfam": "ipv4", 00:23:26.288 "trsvcid": "$NVMF_PORT", 00:23:26.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.288 "hdgst": ${hdgst:-false}, 00:23:26.288 "ddgst": ${ddgst:-false} 00:23:26.288 }, 00:23:26.288 "method": "bdev_nvme_attach_controller" 00:23:26.288 } 00:23:26.288 EOF 00:23:26.288 )") 00:23:26.289 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:26.289 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.289 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.289 { 00:23:26.289 "params": { 00:23:26.289 "name": "Nvme$subsystem", 00:23:26.289 "trtype": "$TEST_TRANSPORT", 00:23:26.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.289 "adrfam": "ipv4", 00:23:26.289 "trsvcid": "$NVMF_PORT", 00:23:26.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.289 "hdgst": ${hdgst:-false}, 00:23:26.289 "ddgst": ${ddgst:-false} 00:23:26.289 }, 00:23:26.289 "method": "bdev_nvme_attach_controller" 00:23:26.289 } 00:23:26.289 EOF 00:23:26.289 )") 00:23:26.289 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:26.289 [2024-07-15 11:34:54.722511] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:23:26.289 [2024-07-15 11:34:54.722565] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:26.289 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.289 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.289 { 00:23:26.289 "params": { 00:23:26.289 "name": "Nvme$subsystem", 00:23:26.289 "trtype": "$TEST_TRANSPORT", 00:23:26.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.289 "adrfam": "ipv4", 00:23:26.289 "trsvcid": "$NVMF_PORT", 00:23:26.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.289 "hdgst": ${hdgst:-false}, 00:23:26.289 "ddgst": ${ddgst:-false} 00:23:26.289 }, 00:23:26.289 "method": "bdev_nvme_attach_controller" 00:23:26.289 } 00:23:26.289 EOF 00:23:26.289 )") 00:23:26.289 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:26.289 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.289 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.289 { 00:23:26.289 "params": { 00:23:26.289 "name": "Nvme$subsystem", 00:23:26.289 "trtype": "$TEST_TRANSPORT", 00:23:26.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.289 "adrfam": "ipv4", 00:23:26.289 "trsvcid": "$NVMF_PORT", 00:23:26.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.289 "hdgst": ${hdgst:-false}, 00:23:26.289 "ddgst": ${ddgst:-false} 00:23:26.289 }, 00:23:26.289 "method": "bdev_nvme_attach_controller" 00:23:26.289 } 00:23:26.289 EOF 00:23:26.289 )") 00:23:26.289 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:26.289 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.289 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.289 { 00:23:26.289 "params": { 00:23:26.289 "name": "Nvme$subsystem", 00:23:26.289 "trtype": "$TEST_TRANSPORT", 00:23:26.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.289 "adrfam": "ipv4", 00:23:26.289 "trsvcid": "$NVMF_PORT", 00:23:26.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.289 "hdgst": ${hdgst:-false}, 00:23:26.289 "ddgst": ${ddgst:-false} 00:23:26.289 }, 00:23:26.289 "method": "bdev_nvme_attach_controller" 00:23:26.289 } 00:23:26.289 EOF 00:23:26.289 )") 00:23:26.289 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:26.289 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.289 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.289 { 00:23:26.289 "params": { 00:23:26.289 "name": "Nvme$subsystem", 00:23:26.289 "trtype": "$TEST_TRANSPORT", 00:23:26.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.289 "adrfam": "ipv4", 00:23:26.289 "trsvcid": "$NVMF_PORT", 00:23:26.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.289 "hdgst": ${hdgst:-false}, 
00:23:26.289 "ddgst": ${ddgst:-false} 00:23:26.289 }, 00:23:26.289 "method": "bdev_nvme_attach_controller" 00:23:26.289 } 00:23:26.289 EOF 00:23:26.289 )") 00:23:26.289 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:26.289 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.289 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:26.289 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:26.289 11:34:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:26.289 "params": { 00:23:26.289 "name": "Nvme1", 00:23:26.289 "trtype": "tcp", 00:23:26.289 "traddr": "10.0.0.2", 00:23:26.289 "adrfam": "ipv4", 00:23:26.289 "trsvcid": "4420", 00:23:26.289 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.289 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:26.289 "hdgst": false, 00:23:26.289 "ddgst": false 00:23:26.289 }, 00:23:26.289 "method": "bdev_nvme_attach_controller" 00:23:26.289 },{ 00:23:26.289 "params": { 00:23:26.289 "name": "Nvme2", 00:23:26.289 "trtype": "tcp", 00:23:26.289 "traddr": "10.0.0.2", 00:23:26.289 "adrfam": "ipv4", 00:23:26.289 "trsvcid": "4420", 00:23:26.289 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:26.289 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:26.289 "hdgst": false, 00:23:26.289 "ddgst": false 00:23:26.289 }, 00:23:26.289 "method": "bdev_nvme_attach_controller" 00:23:26.289 },{ 00:23:26.289 "params": { 00:23:26.289 "name": "Nvme3", 00:23:26.289 "trtype": "tcp", 00:23:26.289 "traddr": "10.0.0.2", 00:23:26.289 "adrfam": "ipv4", 00:23:26.289 "trsvcid": "4420", 00:23:26.289 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:26.289 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:26.289 "hdgst": false, 00:23:26.289 "ddgst": false 00:23:26.289 }, 00:23:26.289 "method": "bdev_nvme_attach_controller" 00:23:26.289 },{ 00:23:26.289 "params": { 00:23:26.289 "name": "Nvme4", 00:23:26.289 "trtype": "tcp", 00:23:26.289 "traddr": "10.0.0.2", 00:23:26.289 "adrfam": "ipv4", 00:23:26.289 "trsvcid": "4420", 00:23:26.289 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:26.289 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:26.289 "hdgst": false, 00:23:26.289 "ddgst": false 00:23:26.289 }, 00:23:26.289 "method": "bdev_nvme_attach_controller" 00:23:26.289 },{ 00:23:26.289 "params": { 00:23:26.289 "name": "Nvme5", 00:23:26.289 "trtype": "tcp", 00:23:26.289 "traddr": "10.0.0.2", 00:23:26.289 "adrfam": "ipv4", 00:23:26.289 "trsvcid": "4420", 00:23:26.289 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:26.289 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:26.289 "hdgst": false, 00:23:26.289 "ddgst": false 00:23:26.289 }, 00:23:26.289 "method": "bdev_nvme_attach_controller" 00:23:26.289 },{ 00:23:26.289 "params": { 00:23:26.289 "name": "Nvme6", 00:23:26.289 "trtype": "tcp", 00:23:26.289 "traddr": "10.0.0.2", 00:23:26.289 "adrfam": "ipv4", 00:23:26.289 "trsvcid": "4420", 00:23:26.289 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:26.289 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:26.289 "hdgst": false, 00:23:26.289 "ddgst": false 00:23:26.289 }, 00:23:26.289 "method": "bdev_nvme_attach_controller" 00:23:26.289 },{ 00:23:26.289 "params": { 00:23:26.289 "name": "Nvme7", 00:23:26.289 "trtype": "tcp", 00:23:26.289 "traddr": "10.0.0.2", 00:23:26.289 "adrfam": "ipv4", 00:23:26.289 "trsvcid": "4420", 00:23:26.289 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:26.289 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:26.289 "hdgst": false, 00:23:26.289 "ddgst": false 
00:23:26.289 }, 00:23:26.289 "method": "bdev_nvme_attach_controller" 00:23:26.289 },{ 00:23:26.289 "params": { 00:23:26.289 "name": "Nvme8", 00:23:26.289 "trtype": "tcp", 00:23:26.289 "traddr": "10.0.0.2", 00:23:26.289 "adrfam": "ipv4", 00:23:26.290 "trsvcid": "4420", 00:23:26.290 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:26.290 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:26.290 "hdgst": false, 00:23:26.290 "ddgst": false 00:23:26.290 }, 00:23:26.290 "method": "bdev_nvme_attach_controller" 00:23:26.290 },{ 00:23:26.290 "params": { 00:23:26.290 "name": "Nvme9", 00:23:26.290 "trtype": "tcp", 00:23:26.290 "traddr": "10.0.0.2", 00:23:26.290 "adrfam": "ipv4", 00:23:26.290 "trsvcid": "4420", 00:23:26.290 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:26.290 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:26.290 "hdgst": false, 00:23:26.290 "ddgst": false 00:23:26.290 }, 00:23:26.290 "method": "bdev_nvme_attach_controller" 00:23:26.290 },{ 00:23:26.290 "params": { 00:23:26.290 "name": "Nvme10", 00:23:26.290 "trtype": "tcp", 00:23:26.290 "traddr": "10.0.0.2", 00:23:26.290 "adrfam": "ipv4", 00:23:26.290 "trsvcid": "4420", 00:23:26.290 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:26.290 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:26.290 "hdgst": false, 00:23:26.290 "ddgst": false 00:23:26.290 }, 00:23:26.290 "method": "bdev_nvme_attach_controller" 00:23:26.290 }' 00:23:26.290 [2024-07-15 11:34:54.782552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.290 [2024-07-15 11:34:54.847008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.671 11:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:27.671 11:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:27.671 11:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:27.671 11:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.671 11:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:27.671 11:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.671 11:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3622680 00:23:27.671 11:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:27.671 11:34:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3622680 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:28.613 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3622295 00:23:28.613 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:28.613 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:28.613 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:28.613 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 
00:23:28.613 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.613 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.613 { 00:23:28.613 "params": { 00:23:28.613 "name": "Nvme$subsystem", 00:23:28.613 "trtype": "$TEST_TRANSPORT", 00:23:28.613 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.613 "adrfam": "ipv4", 00:23:28.613 "trsvcid": "$NVMF_PORT", 00:23:28.613 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.613 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.613 "hdgst": ${hdgst:-false}, 00:23:28.613 "ddgst": ${ddgst:-false} 00:23:28.613 }, 00:23:28.614 "method": "bdev_nvme_attach_controller" 00:23:28.614 } 00:23:28.614 EOF 00:23:28.614 )") 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.614 { 00:23:28.614 "params": { 00:23:28.614 "name": "Nvme$subsystem", 00:23:28.614 "trtype": "$TEST_TRANSPORT", 00:23:28.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.614 "adrfam": "ipv4", 00:23:28.614 "trsvcid": "$NVMF_PORT", 00:23:28.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.614 "hdgst": ${hdgst:-false}, 00:23:28.614 "ddgst": ${ddgst:-false} 00:23:28.614 }, 00:23:28.614 "method": "bdev_nvme_attach_controller" 00:23:28.614 } 00:23:28.614 EOF 00:23:28.614 )") 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.614 { 00:23:28.614 "params": { 00:23:28.614 "name": "Nvme$subsystem", 00:23:28.614 "trtype": "$TEST_TRANSPORT", 00:23:28.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.614 "adrfam": "ipv4", 00:23:28.614 "trsvcid": "$NVMF_PORT", 00:23:28.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.614 "hdgst": ${hdgst:-false}, 00:23:28.614 "ddgst": ${ddgst:-false} 00:23:28.614 }, 00:23:28.614 "method": "bdev_nvme_attach_controller" 00:23:28.614 } 00:23:28.614 EOF 00:23:28.614 )") 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.614 { 00:23:28.614 "params": { 00:23:28.614 "name": "Nvme$subsystem", 00:23:28.614 "trtype": "$TEST_TRANSPORT", 00:23:28.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.614 "adrfam": "ipv4", 00:23:28.614 "trsvcid": "$NVMF_PORT", 00:23:28.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.614 "hdgst": ${hdgst:-false}, 00:23:28.614 "ddgst": ${ddgst:-false} 00:23:28.614 }, 00:23:28.614 "method": "bdev_nvme_attach_controller" 00:23:28.614 } 00:23:28.614 EOF 00:23:28.614 )") 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.614 11:34:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.614 { 00:23:28.614 "params": { 00:23:28.614 "name": "Nvme$subsystem", 00:23:28.614 "trtype": "$TEST_TRANSPORT", 00:23:28.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.614 "adrfam": "ipv4", 00:23:28.614 "trsvcid": "$NVMF_PORT", 00:23:28.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.614 "hdgst": ${hdgst:-false}, 00:23:28.614 "ddgst": ${ddgst:-false} 00:23:28.614 }, 00:23:28.614 "method": "bdev_nvme_attach_controller" 00:23:28.614 } 00:23:28.614 EOF 00:23:28.614 )") 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.614 { 00:23:28.614 "params": { 00:23:28.614 "name": "Nvme$subsystem", 00:23:28.614 "trtype": "$TEST_TRANSPORT", 00:23:28.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.614 "adrfam": "ipv4", 00:23:28.614 "trsvcid": "$NVMF_PORT", 00:23:28.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.614 "hdgst": ${hdgst:-false}, 00:23:28.614 "ddgst": ${ddgst:-false} 00:23:28.614 }, 00:23:28.614 "method": "bdev_nvme_attach_controller" 00:23:28.614 } 00:23:28.614 EOF 00:23:28.614 )") 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.614 [2024-07-15 11:34:57.136246] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:23:28.614 [2024-07-15 11:34:57.136305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3623051 ] 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.614 { 00:23:28.614 "params": { 00:23:28.614 "name": "Nvme$subsystem", 00:23:28.614 "trtype": "$TEST_TRANSPORT", 00:23:28.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.614 "adrfam": "ipv4", 00:23:28.614 "trsvcid": "$NVMF_PORT", 00:23:28.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.614 "hdgst": ${hdgst:-false}, 00:23:28.614 "ddgst": ${ddgst:-false} 00:23:28.614 }, 00:23:28.614 "method": "bdev_nvme_attach_controller" 00:23:28.614 } 00:23:28.614 EOF 00:23:28.614 )") 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.614 { 00:23:28.614 "params": { 00:23:28.614 "name": "Nvme$subsystem", 00:23:28.614 "trtype": "$TEST_TRANSPORT", 00:23:28.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.614 "adrfam": "ipv4", 00:23:28.614 "trsvcid": "$NVMF_PORT", 00:23:28.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.614 "hdgst": ${hdgst:-false}, 00:23:28.614 "ddgst": ${ddgst:-false} 00:23:28.614 }, 00:23:28.614 "method": "bdev_nvme_attach_controller" 00:23:28.614 } 00:23:28.614 EOF 00:23:28.614 )") 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.614 { 00:23:28.614 "params": { 00:23:28.614 "name": "Nvme$subsystem", 00:23:28.614 "trtype": "$TEST_TRANSPORT", 00:23:28.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.614 "adrfam": "ipv4", 00:23:28.614 "trsvcid": "$NVMF_PORT", 00:23:28.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.614 "hdgst": ${hdgst:-false}, 00:23:28.614 "ddgst": ${ddgst:-false} 00:23:28.614 }, 00:23:28.614 "method": "bdev_nvme_attach_controller" 00:23:28.614 } 00:23:28.614 EOF 00:23:28.614 )") 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.614 { 00:23:28.614 "params": { 00:23:28.614 "name": "Nvme$subsystem", 00:23:28.614 "trtype": "$TEST_TRANSPORT", 00:23:28.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.614 "adrfam": "ipv4", 00:23:28.614 "trsvcid": "$NVMF_PORT", 00:23:28.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.614 
"hdgst": ${hdgst:-false}, 00:23:28.614 "ddgst": ${ddgst:-false} 00:23:28.614 }, 00:23:28.614 "method": "bdev_nvme_attach_controller" 00:23:28.614 } 00:23:28.614 EOF 00:23:28.614 )") 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.614 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:28.614 11:34:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:28.614 "params": { 00:23:28.614 "name": "Nvme1", 00:23:28.614 "trtype": "tcp", 00:23:28.614 "traddr": "10.0.0.2", 00:23:28.614 "adrfam": "ipv4", 00:23:28.614 "trsvcid": "4420", 00:23:28.614 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.614 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:28.614 "hdgst": false, 00:23:28.614 "ddgst": false 00:23:28.614 }, 00:23:28.614 "method": "bdev_nvme_attach_controller" 00:23:28.614 },{ 00:23:28.614 "params": { 00:23:28.614 "name": "Nvme2", 00:23:28.615 "trtype": "tcp", 00:23:28.615 "traddr": "10.0.0.2", 00:23:28.615 "adrfam": "ipv4", 00:23:28.615 "trsvcid": "4420", 00:23:28.615 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:28.615 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:28.615 "hdgst": false, 00:23:28.615 "ddgst": false 00:23:28.615 }, 00:23:28.615 "method": "bdev_nvme_attach_controller" 00:23:28.615 },{ 00:23:28.615 "params": { 00:23:28.615 "name": "Nvme3", 00:23:28.615 "trtype": "tcp", 00:23:28.615 "traddr": "10.0.0.2", 00:23:28.615 "adrfam": "ipv4", 00:23:28.615 "trsvcid": "4420", 00:23:28.615 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:28.615 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:28.615 "hdgst": false, 00:23:28.615 "ddgst": false 00:23:28.615 }, 00:23:28.615 "method": "bdev_nvme_attach_controller" 00:23:28.615 },{ 00:23:28.615 "params": { 00:23:28.615 "name": "Nvme4", 00:23:28.615 "trtype": "tcp", 00:23:28.615 "traddr": "10.0.0.2", 00:23:28.615 "adrfam": "ipv4", 00:23:28.615 "trsvcid": "4420", 00:23:28.615 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:28.615 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:28.615 "hdgst": false, 00:23:28.615 "ddgst": false 00:23:28.615 }, 00:23:28.615 "method": "bdev_nvme_attach_controller" 00:23:28.615 },{ 00:23:28.615 "params": { 00:23:28.615 "name": "Nvme5", 00:23:28.615 "trtype": "tcp", 00:23:28.615 "traddr": "10.0.0.2", 00:23:28.615 "adrfam": "ipv4", 00:23:28.615 "trsvcid": "4420", 00:23:28.615 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:28.615 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:28.615 "hdgst": false, 00:23:28.615 "ddgst": false 00:23:28.615 }, 00:23:28.615 "method": "bdev_nvme_attach_controller" 00:23:28.615 },{ 00:23:28.615 "params": { 00:23:28.615 "name": "Nvme6", 00:23:28.615 "trtype": "tcp", 00:23:28.615 "traddr": "10.0.0.2", 00:23:28.615 "adrfam": "ipv4", 00:23:28.615 "trsvcid": "4420", 00:23:28.615 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:28.615 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:28.615 "hdgst": false, 00:23:28.615 "ddgst": false 00:23:28.615 }, 00:23:28.615 "method": "bdev_nvme_attach_controller" 00:23:28.615 },{ 00:23:28.615 "params": { 00:23:28.615 "name": "Nvme7", 00:23:28.615 "trtype": "tcp", 00:23:28.615 "traddr": "10.0.0.2", 00:23:28.615 "adrfam": "ipv4", 00:23:28.615 "trsvcid": "4420", 00:23:28.615 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:28.615 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:28.615 "hdgst": false, 
00:23:28.615 "ddgst": false 00:23:28.615 }, 00:23:28.615 "method": "bdev_nvme_attach_controller" 00:23:28.615 },{ 00:23:28.615 "params": { 00:23:28.615 "name": "Nvme8", 00:23:28.615 "trtype": "tcp", 00:23:28.615 "traddr": "10.0.0.2", 00:23:28.615 "adrfam": "ipv4", 00:23:28.615 "trsvcid": "4420", 00:23:28.615 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:28.615 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:28.615 "hdgst": false, 00:23:28.615 "ddgst": false 00:23:28.615 }, 00:23:28.615 "method": "bdev_nvme_attach_controller" 00:23:28.615 },{ 00:23:28.615 "params": { 00:23:28.615 "name": "Nvme9", 00:23:28.615 "trtype": "tcp", 00:23:28.615 "traddr": "10.0.0.2", 00:23:28.615 "adrfam": "ipv4", 00:23:28.615 "trsvcid": "4420", 00:23:28.615 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:28.615 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:28.615 "hdgst": false, 00:23:28.615 "ddgst": false 00:23:28.615 }, 00:23:28.615 "method": "bdev_nvme_attach_controller" 00:23:28.615 },{ 00:23:28.615 "params": { 00:23:28.615 "name": "Nvme10", 00:23:28.615 "trtype": "tcp", 00:23:28.615 "traddr": "10.0.0.2", 00:23:28.615 "adrfam": "ipv4", 00:23:28.615 "trsvcid": "4420", 00:23:28.615 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:28.615 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:28.615 "hdgst": false, 00:23:28.615 "ddgst": false 00:23:28.615 }, 00:23:28.615 "method": "bdev_nvme_attach_controller" 00:23:28.615 }' 00:23:28.615 [2024-07-15 11:34:57.196881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.615 [2024-07-15 11:34:57.261139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.522 Running I/O for 1 seconds... 00:23:31.461 00:23:31.461 Latency(us) 00:23:31.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.461 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.461 Verification LBA range: start 0x0 length 0x400 00:23:31.461 Nvme1n1 : 1.08 178.05 11.13 0.00 0.00 355656.25 25340.59 277872.64 00:23:31.461 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.461 Verification LBA range: start 0x0 length 0x400 00:23:31.461 Nvme2n1 : 1.13 227.42 14.21 0.00 0.00 273785.39 22828.37 255153.49 00:23:31.461 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.461 Verification LBA range: start 0x0 length 0x400 00:23:31.461 Nvme3n1 : 1.12 228.55 14.28 0.00 0.00 267253.55 19114.67 246415.36 00:23:31.461 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.461 Verification LBA range: start 0x0 length 0x400 00:23:31.461 Nvme4n1 : 1.13 282.72 17.67 0.00 0.00 212443.90 12342.61 248162.99 00:23:31.461 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.461 Verification LBA range: start 0x0 length 0x400 00:23:31.461 Nvme5n1 : 1.11 240.32 15.02 0.00 0.00 239511.68 10431.15 255153.49 00:23:31.461 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.461 Verification LBA range: start 0x0 length 0x400 00:23:31.461 Nvme6n1 : 1.11 229.99 14.37 0.00 0.00 251295.15 20534.61 262144.00 00:23:31.461 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.461 Verification LBA range: start 0x0 length 0x400 00:23:31.461 Nvme7n1 : 1.13 287.57 17.97 0.00 0.00 196769.82 4341.76 218453.33 00:23:31.461 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.461 Verification LBA range: start 0x0 length 0x400 00:23:31.461 Nvme8n1 : 1.16 
220.44 13.78 0.00 0.00 253616.85 25340.59 267386.88 00:23:31.461 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.461 Verification LBA range: start 0x0 length 0x400 00:23:31.461 Nvme9n1 : 1.16 274.96 17.19 0.00 0.00 199515.99 13380.27 246415.36 00:23:31.461 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.461 Verification LBA range: start 0x0 length 0x400 00:23:31.461 Nvme10n1 : 1.19 268.78 16.80 0.00 0.00 201153.71 8847.36 267386.88 00:23:31.461 =================================================================================================================== 00:23:31.461 Total : 2438.81 152.43 0.00 0.00 238506.99 4341.76 277872.64 00:23:31.461 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:23:31.461 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:31.461 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:31.461 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:31.461 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:31.461 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:31.461 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:23:31.461 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:31.461 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:23:31.461 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:31.461 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:31.461 rmmod nvme_tcp 00:23:31.461 rmmod nvme_fabrics 00:23:31.722 rmmod nvme_keyring 00:23:31.722 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:31.722 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:23:31.722 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:23:31.722 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3622295 ']' 00:23:31.722 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3622295 00:23:31.722 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 3622295 ']' 00:23:31.722 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 3622295 00:23:31.722 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:23:31.722 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:31.722 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3622295 00:23:31.722 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:31.722 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:31.722 11:35:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3622295' 00:23:31.722 killing process with pid 3622295 00:23:31.722 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 3622295 00:23:31.722 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 3622295 00:23:31.983 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:31.983 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:31.983 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:31.983 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:31.983 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:31.983 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.983 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:31.983 11:35:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.890 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:33.890 00:23:33.890 real 0m16.361s 00:23:33.890 user 0m33.834s 00:23:33.890 sys 0m6.364s 00:23:33.890 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:33.890 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:33.890 ************************************ 00:23:33.890 END TEST nvmf_shutdown_tc1 00:23:33.890 ************************************ 00:23:33.890 11:35:02 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:33.890 11:35:02 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:33.890 11:35:02 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:33.890 11:35:02 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:33.890 11:35:02 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:34.151 ************************************ 00:23:34.151 START TEST nvmf_shutdown_tc2 00:23:34.151 ************************************ 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:34.151 11:35:02 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:34.151 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:34.152 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:34.152 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:34.152 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:34.152 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:34.152 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:34.412 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:34.412 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:34.412 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:34.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:34.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:23:34.412 00:23:34.412 --- 10.0.0.2 ping statistics --- 00:23:34.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.412 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:23:34.412 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:34.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:34.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.381 ms 00:23:34.412 00:23:34.412 --- 10.0.0.1 ping statistics --- 00:23:34.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.412 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:23:34.412 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:34.412 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:23:34.412 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:34.412 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:34.412 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:34.412 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:34.412 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:34.412 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:34.412 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:34.412 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:34.412 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:34.412 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:34.412 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:34.412 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=3624306 00:23:34.412 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3624306 00:23:34.412 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3624306 ']' 00:23:34.412 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:34.412 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.412 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:34.412 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.413 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:34.413 11:35:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:34.413 [2024-07-15 11:35:03.053458] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:23:34.413 [2024-07-15 11:35:03.053526] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.413 EAL: No free 2048 kB hugepages reported on node 1 00:23:34.674 [2024-07-15 11:35:03.141662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:34.674 [2024-07-15 11:35:03.203746] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.674 [2024-07-15 11:35:03.203780] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:34.674 [2024-07-15 11:35:03.203785] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:34.674 [2024-07-15 11:35:03.203790] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:34.674 [2024-07-15 11:35:03.203794] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
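The waitforlisten step above blocks until the freshly started nvmf_tgt (pid 3624306) answers on /var/tmp/spdk.sock. Its internals are not shown in this trace; the following is only a rough bash sketch of that polling pattern, and the retry count, the sleep interval, and the use of the rpc.py client from the checked-out spdk tree are all assumptions.

# Sketch only (not the harness code): poll the RPC socket until the target
# process answers, or give up after an assumed number of attempts.
wait_for_rpc_socket() {
    local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1        # target process died
        # rpc_get_methods succeeds once the UNIX domain socket is listening
        if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
               -s "$rpc_sock" rpc_get_methods > /dev/null 2>&1; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}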
00:23:34.674 [2024-07-15 11:35:03.203898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.674 [2024-07-15 11:35:03.204056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:34.674 [2024-07-15 11:35:03.204212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:34.674 [2024-07-15 11:35:03.204379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:35.245 [2024-07-15 11:35:03.865687] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:35.245 11:35:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.245 11:35:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:35.245 Malloc1 00:23:35.504 [2024-07-15 11:35:03.964376] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:35.504 Malloc2 00:23:35.504 Malloc3 00:23:35.504 Malloc4 00:23:35.504 Malloc5 00:23:35.504 Malloc6 00:23:35.504 Malloc7 00:23:35.765 Malloc8 00:23:35.765 Malloc9 00:23:35.765 Malloc10 00:23:35.765 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.765 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:35.765 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:35.765 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:35.765 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3624553 00:23:35.765 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3624553 /var/tmp/bdevperf.sock 00:23:35.765 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3624553 ']' 00:23:35.765 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:35.765 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:35.766 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:35.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
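The gen_nvmf_target_json helper traced below builds the --json config that bdevperf reads from /dev/fd/63: one bdev_nvme_attach_controller stanza per subsystem, comma-joined and pretty-printed through jq. A condensed bash sketch reconstructed from that trace follows; the surrounding "subsystems"/"bdev" envelope is not visible in the trace and is an assumption here, and the TEST_TRANSPORT / NVMF_* variables are expected to come from the test environment.

# Sketch reconstructed from the xtrace below: emit one attach-controller
# stanza per requested subsystem, then wrap the comma-joined list in a
# bdev-subsystem JSON config (envelope assumed) and pretty-print it.
gen_target_json_sketch() {
    local subsystem config=()

    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done

    jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        $(IFS=,; printf '%s\n' "${config[*]}")
      ]
    }
  ]
}
JSON
}

In the harness this output is handed to bdevperf through process substitution, which is why the trace shows --json /dev/fd/63 on the bdevperf command line.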
00:23:35.766 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:35.766 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:35.766 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:35.766 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:35.766 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:35.766 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:35.766 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:35.766 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:35.766 { 00:23:35.766 "params": { 00:23:35.766 "name": "Nvme$subsystem", 00:23:35.766 "trtype": "$TEST_TRANSPORT", 00:23:35.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.766 "adrfam": "ipv4", 00:23:35.766 "trsvcid": "$NVMF_PORT", 00:23:35.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.766 "hdgst": ${hdgst:-false}, 00:23:35.766 "ddgst": ${ddgst:-false} 00:23:35.766 }, 00:23:35.766 "method": "bdev_nvme_attach_controller" 00:23:35.766 } 00:23:35.766 EOF 00:23:35.766 )") 00:23:35.766 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:35.766 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:35.766 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:35.766 { 00:23:35.766 "params": { 00:23:35.766 "name": "Nvme$subsystem", 00:23:35.766 "trtype": "$TEST_TRANSPORT", 00:23:35.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.766 "adrfam": "ipv4", 00:23:35.766 "trsvcid": "$NVMF_PORT", 00:23:35.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.766 "hdgst": ${hdgst:-false}, 00:23:35.766 "ddgst": ${ddgst:-false} 00:23:35.766 }, 00:23:35.766 "method": "bdev_nvme_attach_controller" 00:23:35.766 } 00:23:35.766 EOF 00:23:35.766 )") 00:23:35.766 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:35.766 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:35.766 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:35.766 { 00:23:35.766 "params": { 00:23:35.766 "name": "Nvme$subsystem", 00:23:35.766 "trtype": "$TEST_TRANSPORT", 00:23:35.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.766 "adrfam": "ipv4", 00:23:35.766 "trsvcid": "$NVMF_PORT", 00:23:35.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.766 "hdgst": ${hdgst:-false}, 00:23:35.766 "ddgst": ${ddgst:-false} 00:23:35.766 }, 00:23:35.766 "method": "bdev_nvme_attach_controller" 00:23:35.766 } 00:23:35.766 EOF 00:23:35.766 )") 00:23:35.766 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:35.766 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:23:35.766 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:35.766 { 00:23:35.766 "params": { 00:23:35.766 "name": "Nvme$subsystem", 00:23:35.766 "trtype": "$TEST_TRANSPORT", 00:23:35.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.766 "adrfam": "ipv4", 00:23:35.766 "trsvcid": "$NVMF_PORT", 00:23:35.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.766 "hdgst": ${hdgst:-false}, 00:23:35.766 "ddgst": ${ddgst:-false} 00:23:35.766 }, 00:23:35.766 "method": "bdev_nvme_attach_controller" 00:23:35.766 } 00:23:35.766 EOF 00:23:35.766 )") 00:23:35.766 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:35.766 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:35.766 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:35.766 { 00:23:35.766 "params": { 00:23:35.766 "name": "Nvme$subsystem", 00:23:35.766 "trtype": "$TEST_TRANSPORT", 00:23:35.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.766 "adrfam": "ipv4", 00:23:35.766 "trsvcid": "$NVMF_PORT", 00:23:35.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.766 "hdgst": ${hdgst:-false}, 00:23:35.766 "ddgst": ${ddgst:-false} 00:23:35.766 }, 00:23:35.766 "method": "bdev_nvme_attach_controller" 00:23:35.766 } 00:23:35.766 EOF 00:23:35.766 )") 00:23:35.766 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:35.766 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:35.766 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:35.766 { 00:23:35.766 "params": { 00:23:35.766 "name": "Nvme$subsystem", 00:23:35.766 "trtype": "$TEST_TRANSPORT", 00:23:35.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.766 "adrfam": "ipv4", 00:23:35.766 "trsvcid": "$NVMF_PORT", 00:23:35.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.766 "hdgst": ${hdgst:-false}, 00:23:35.766 "ddgst": ${ddgst:-false} 00:23:35.766 }, 00:23:35.766 "method": "bdev_nvme_attach_controller" 00:23:35.766 } 00:23:35.766 EOF 00:23:35.766 )") 00:23:35.766 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:35.766 [2024-07-15 11:35:04.411071] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:23:35.766 [2024-07-15 11:35:04.411129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3624553 ] 00:23:35.766 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:35.766 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:35.766 { 00:23:35.766 "params": { 00:23:35.766 "name": "Nvme$subsystem", 00:23:35.766 "trtype": "$TEST_TRANSPORT", 00:23:35.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.766 "adrfam": "ipv4", 00:23:35.766 "trsvcid": "$NVMF_PORT", 00:23:35.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.767 "hdgst": ${hdgst:-false}, 00:23:35.767 "ddgst": ${ddgst:-false} 00:23:35.767 }, 00:23:35.767 "method": "bdev_nvme_attach_controller" 00:23:35.767 } 00:23:35.767 EOF 00:23:35.767 )") 00:23:35.767 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:35.767 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:35.767 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:35.767 { 00:23:35.767 "params": { 00:23:35.767 "name": "Nvme$subsystem", 00:23:35.767 "trtype": "$TEST_TRANSPORT", 00:23:35.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.767 "adrfam": "ipv4", 00:23:35.767 "trsvcid": "$NVMF_PORT", 00:23:35.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.767 "hdgst": ${hdgst:-false}, 00:23:35.767 "ddgst": ${ddgst:-false} 00:23:35.767 }, 00:23:35.767 "method": "bdev_nvme_attach_controller" 00:23:35.767 } 00:23:35.767 EOF 00:23:35.767 )") 00:23:35.767 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:35.767 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:35.767 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:35.767 { 00:23:35.767 "params": { 00:23:35.767 "name": "Nvme$subsystem", 00:23:35.767 "trtype": "$TEST_TRANSPORT", 00:23:35.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.767 "adrfam": "ipv4", 00:23:35.767 "trsvcid": "$NVMF_PORT", 00:23:35.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.767 "hdgst": ${hdgst:-false}, 00:23:35.767 "ddgst": ${ddgst:-false} 00:23:35.767 }, 00:23:35.767 "method": "bdev_nvme_attach_controller" 00:23:35.767 } 00:23:35.767 EOF 00:23:35.767 )") 00:23:35.767 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:35.767 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:35.767 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:35.767 { 00:23:35.767 "params": { 00:23:35.767 "name": "Nvme$subsystem", 00:23:35.767 "trtype": "$TEST_TRANSPORT", 00:23:35.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.767 "adrfam": "ipv4", 00:23:35.767 "trsvcid": "$NVMF_PORT", 00:23:35.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.767 
"hdgst": ${hdgst:-false}, 00:23:35.767 "ddgst": ${ddgst:-false} 00:23:35.767 }, 00:23:35.767 "method": "bdev_nvme_attach_controller" 00:23:35.767 } 00:23:35.767 EOF 00:23:35.767 )") 00:23:35.767 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.767 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:35.767 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:23:35.767 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:35.767 11:35:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:35.767 "params": { 00:23:35.767 "name": "Nvme1", 00:23:35.767 "trtype": "tcp", 00:23:35.767 "traddr": "10.0.0.2", 00:23:35.767 "adrfam": "ipv4", 00:23:35.767 "trsvcid": "4420", 00:23:35.767 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.767 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:35.767 "hdgst": false, 00:23:35.767 "ddgst": false 00:23:35.767 }, 00:23:35.767 "method": "bdev_nvme_attach_controller" 00:23:35.767 },{ 00:23:35.767 "params": { 00:23:35.767 "name": "Nvme2", 00:23:35.767 "trtype": "tcp", 00:23:35.767 "traddr": "10.0.0.2", 00:23:35.767 "adrfam": "ipv4", 00:23:35.767 "trsvcid": "4420", 00:23:35.767 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:35.767 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:35.767 "hdgst": false, 00:23:35.767 "ddgst": false 00:23:35.767 }, 00:23:35.767 "method": "bdev_nvme_attach_controller" 00:23:35.767 },{ 00:23:35.767 "params": { 00:23:35.767 "name": "Nvme3", 00:23:35.767 "trtype": "tcp", 00:23:35.767 "traddr": "10.0.0.2", 00:23:35.767 "adrfam": "ipv4", 00:23:35.767 "trsvcid": "4420", 00:23:35.767 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:35.767 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:35.767 "hdgst": false, 00:23:35.767 "ddgst": false 00:23:35.767 }, 00:23:35.767 "method": "bdev_nvme_attach_controller" 00:23:35.767 },{ 00:23:35.767 "params": { 00:23:35.767 "name": "Nvme4", 00:23:35.767 "trtype": "tcp", 00:23:35.767 "traddr": "10.0.0.2", 00:23:35.767 "adrfam": "ipv4", 00:23:35.767 "trsvcid": "4420", 00:23:35.767 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:35.767 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:35.767 "hdgst": false, 00:23:35.767 "ddgst": false 00:23:35.767 }, 00:23:35.767 "method": "bdev_nvme_attach_controller" 00:23:35.767 },{ 00:23:35.767 "params": { 00:23:35.767 "name": "Nvme5", 00:23:35.767 "trtype": "tcp", 00:23:35.767 "traddr": "10.0.0.2", 00:23:35.767 "adrfam": "ipv4", 00:23:35.767 "trsvcid": "4420", 00:23:35.767 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:35.767 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:35.767 "hdgst": false, 00:23:35.767 "ddgst": false 00:23:35.767 }, 00:23:35.767 "method": "bdev_nvme_attach_controller" 00:23:35.767 },{ 00:23:35.767 "params": { 00:23:35.767 "name": "Nvme6", 00:23:35.767 "trtype": "tcp", 00:23:35.767 "traddr": "10.0.0.2", 00:23:35.767 "adrfam": "ipv4", 00:23:35.767 "trsvcid": "4420", 00:23:35.767 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:35.767 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:35.767 "hdgst": false, 00:23:35.767 "ddgst": false 00:23:35.767 }, 00:23:35.767 "method": "bdev_nvme_attach_controller" 00:23:35.767 },{ 00:23:35.767 "params": { 00:23:35.767 "name": "Nvme7", 00:23:35.767 "trtype": "tcp", 00:23:35.767 "traddr": "10.0.0.2", 00:23:35.767 "adrfam": "ipv4", 00:23:35.767 "trsvcid": "4420", 00:23:35.767 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:35.767 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:35.767 "hdgst": false, 
00:23:35.767 "ddgst": false 00:23:35.768 }, 00:23:35.768 "method": "bdev_nvme_attach_controller" 00:23:35.768 },{ 00:23:35.768 "params": { 00:23:35.768 "name": "Nvme8", 00:23:35.768 "trtype": "tcp", 00:23:35.768 "traddr": "10.0.0.2", 00:23:35.768 "adrfam": "ipv4", 00:23:35.768 "trsvcid": "4420", 00:23:35.768 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:35.768 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:35.768 "hdgst": false, 00:23:35.768 "ddgst": false 00:23:35.768 }, 00:23:35.768 "method": "bdev_nvme_attach_controller" 00:23:35.768 },{ 00:23:35.768 "params": { 00:23:35.768 "name": "Nvme9", 00:23:35.768 "trtype": "tcp", 00:23:35.768 "traddr": "10.0.0.2", 00:23:35.768 "adrfam": "ipv4", 00:23:35.768 "trsvcid": "4420", 00:23:35.768 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:35.768 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:35.768 "hdgst": false, 00:23:35.768 "ddgst": false 00:23:35.768 }, 00:23:35.768 "method": "bdev_nvme_attach_controller" 00:23:35.768 },{ 00:23:35.768 "params": { 00:23:35.768 "name": "Nvme10", 00:23:35.768 "trtype": "tcp", 00:23:35.768 "traddr": "10.0.0.2", 00:23:35.768 "adrfam": "ipv4", 00:23:35.768 "trsvcid": "4420", 00:23:35.768 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:35.768 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:35.768 "hdgst": false, 00:23:35.768 "ddgst": false 00:23:35.768 }, 00:23:35.768 "method": "bdev_nvme_attach_controller" 00:23:35.768 }' 00:23:36.028 [2024-07-15 11:35:04.471368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.028 [2024-07-15 11:35:04.536353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.409 Running I/O for 10 seconds... 00:23:37.409 11:35:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:37.409 11:35:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:37.409 11:35:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:37.409 11:35:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.409 11:35:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:37.669 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.669 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:37.669 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:37.669 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:37.669 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:37.669 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:37.670 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:37.670 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:37.670 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:37.670 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:37.670 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:37.670 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:37.670 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.670 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:37.670 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:37.670 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:37.943 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:37.943 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:37.943 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:37.943 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:37.943 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.943 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:37.943 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.943 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=72 00:23:37.943 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 72 -ge 100 ']' 00:23:37.943 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:38.203 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:38.203 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:38.203 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:38.203 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:38.203 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.203 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:38.203 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.203 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=136 00:23:38.203 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 136 -ge 100 ']' 00:23:38.203 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:38.203 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:38.203 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:38.203 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3624553 00:23:38.203 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3624553 ']' 00:23:38.203 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3624553 00:23:38.203 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:38.203 11:35:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:38.203 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3624553 00:23:38.203 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:38.203 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:38.203 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3624553' 00:23:38.203 killing process with pid 3624553 00:23:38.203 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3624553 00:23:38.203 11:35:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3624553 00:23:38.483 Received shutdown signal, test time was about 0.970765 seconds 00:23:38.483 00:23:38.483 Latency(us) 00:23:38.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.483 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:38.483 Verification LBA range: start 0x0 length 0x400 00:23:38.483 Nvme1n1 : 0.97 264.85 16.55 0.00 0.00 237317.44 7755.09 239424.85 00:23:38.483 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:38.483 Verification LBA range: start 0x0 length 0x400 00:23:38.483 Nvme2n1 : 0.97 265.14 16.57 0.00 0.00 233840.85 22719.15 260396.37 00:23:38.483 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:38.483 Verification LBA range: start 0x0 length 0x400 00:23:38.483 Nvme3n1 : 0.95 269.03 16.81 0.00 0.00 225028.27 19879.25 242920.11 00:23:38.483 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:38.483 Verification LBA range: start 0x0 length 0x400 00:23:38.483 Nvme4n1 : 0.96 267.51 16.72 0.00 0.00 222124.16 19660.80 242920.11 00:23:38.483 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:38.483 Verification LBA range: start 0x0 length 0x400 00:23:38.483 Nvme5n1 : 0.94 204.08 12.75 0.00 0.00 284784.07 20316.16 249910.61 00:23:38.483 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:38.483 Verification LBA range: start 0x0 length 0x400 00:23:38.483 Nvme6n1 : 0.93 205.96 12.87 0.00 0.00 275409.92 21080.75 244667.73 00:23:38.483 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:38.483 Verification LBA range: start 0x0 length 0x400 00:23:38.483 Nvme7n1 : 0.96 266.30 16.64 0.00 0.00 208455.47 22719.15 244667.73 00:23:38.483 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:38.483 Verification LBA range: start 0x0 length 0x400 00:23:38.483 Nvme8n1 : 0.97 263.95 16.50 0.00 0.00 206613.97 23592.96 263891.63 00:23:38.483 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:38.483 Verification LBA range: start 0x0 length 0x400 00:23:38.483 Nvme9n1 : 0.95 201.11 12.57 0.00 0.00 264150.47 22609.92 276125.01 00:23:38.483 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:38.483 Verification LBA range: start 0x0 length 0x400 00:23:38.483 Nvme10n1 : 0.95 202.99 12.69 0.00 0.00 254968.32 20206.93 248162.99 00:23:38.483 =================================================================================================================== 00:23:38.483 Total : 2410.91 150.68 0.00 0.00 238096.08 
7755.09 276125.01 00:23:38.483 11:35:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:23:39.424 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3624306 00:23:39.424 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:23:39.424 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:39.424 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:39.424 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:39.424 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:39.424 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:39.424 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:39.424 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:39.424 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:39.424 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:39.424 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:39.424 rmmod nvme_tcp 00:23:39.424 rmmod nvme_fabrics 00:23:39.684 rmmod nvme_keyring 00:23:39.684 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:39.684 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:39.684 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:39.684 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3624306 ']' 00:23:39.684 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3624306 00:23:39.684 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3624306 ']' 00:23:39.684 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3624306 00:23:39.684 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:39.684 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:39.684 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3624306 00:23:39.684 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:39.684 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:39.684 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3624306' 00:23:39.684 killing process with pid 3624306 00:23:39.684 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3624306 00:23:39.684 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3624306 00:23:39.945 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
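A minimal sketch of the nvmftestfini teardown traced above, using the module names, target PID and interface name from this run; the namespace removal at the end is an assumed stand-in for the _remove_spdk_ns helper, whose body is not shown in this excerpt:

# teardown order taken from the trace; PID, modules and interface names are the ones from this run,
# and the netns delete is an assumption standing in for _remove_spdk_ns
sync
modprobe -v -r nvme-tcp        # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring unloading
modprobe -v -r nvme-fabrics
kill 3624306 && wait 3624306   # nvmf target PID in this run (killprocess)
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
ip -4 addr flush cvl_0_1       # drop the initiator-side test address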
00:23:39.945 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:39.945 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:39.945 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:39.945 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:39.945 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.945 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.945 11:35:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.858 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:41.858 00:23:41.858 real 0m7.893s 00:23:41.858 user 0m23.762s 00:23:41.858 sys 0m1.216s 00:23:41.858 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:41.858 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:41.858 ************************************ 00:23:41.858 END TEST nvmf_shutdown_tc2 00:23:41.858 ************************************ 00:23:41.858 11:35:10 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:41.858 11:35:10 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:41.858 11:35:10 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:41.858 11:35:10 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:41.858 11:35:10 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:42.119 ************************************ 00:23:42.119 START TEST nvmf_shutdown_tc3 00:23:42.119 ************************************ 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 
00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:42.119 11:35:10 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:42.119 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:42.119 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:42.119 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:42.119 11:35:10 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.119 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:42.119 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:42.120 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.120 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:42.120 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:42.120 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:42.120 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:42.120 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:42.120 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:42.120 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:42.120 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:42.120 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:42.120 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:42.120 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:42.120 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:42.120 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:42.120 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:42.120 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:42.120 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:42.120 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:42.120 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:42.120 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:42.120 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:42.120 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:42.120 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:42.381 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:42.381 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:42.381 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:42.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:42.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:23:42.381 00:23:42.381 --- 10.0.0.2 ping statistics --- 00:23:42.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.381 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:23:42.381 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:42.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:42.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:23:42.381 00:23:42.381 --- 10.0.0.1 ping statistics --- 00:23:42.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.381 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:23:42.381 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:42.381 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:42.381 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:42.381 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:42.381 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:42.381 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:42.381 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:42.381 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:42.381 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:42.381 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:42.381 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:42.381 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:42.381 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:42.381 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3626000 00:23:42.381 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3626000 00:23:42.381 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:42.381 11:35:10 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3626000 ']' 00:23:42.381 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.381 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:42.381 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.381 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:42.381 11:35:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:42.381 [2024-07-15 11:35:11.048517] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:23:42.382 [2024-07-15 11:35:11.048587] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.643 EAL: No free 2048 kB hugepages reported on node 1 00:23:42.643 [2024-07-15 11:35:11.137217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:42.643 [2024-07-15 11:35:11.198368] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.643 [2024-07-15 11:35:11.198406] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.643 [2024-07-15 11:35:11.198412] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.643 [2024-07-15 11:35:11.198417] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:42.643 [2024-07-15 11:35:11.198422] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
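The tc3 setup above carves the two E810 ports into a target/initiator pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and nvmf_tgt is then launched inside the namespace with core mask 0x1E, which matches the four reactor notices on cores 1-4. A condensed sketch of that plumbing; the commands are the ones visible in the trace, but the socket wait loop at the end is only an illustration of what waitforlisten accomplishes, not the helper itself:

# interface names, addresses and target arguments copied from the trace; the wait loop is assumed
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done  # stand-in for: waitforlisten $nvmfpid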
00:23:42.643 [2024-07-15 11:35:11.198526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.643 [2024-07-15 11:35:11.198687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:42.643 [2024-07-15 11:35:11.198847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.643 [2024-07-15 11:35:11.198849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.274 [2024-07-15 11:35:11.862263] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:43.274 11:35:11 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:43.274 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:43.275 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.275 11:35:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.275 Malloc1 00:23:43.275 [2024-07-15 11:35:11.960947] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.535 Malloc2 00:23:43.535 Malloc3 00:23:43.535 Malloc4 00:23:43.535 Malloc5 00:23:43.535 Malloc6 00:23:43.535 Malloc7 00:23:43.535 Malloc8 00:23:43.796 Malloc9 00:23:43.796 Malloc10 00:23:43.796 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.796 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3626382 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3626382 /var/tmp/bdevperf.sock 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3626382 ']' 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:43.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
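The bdevperf invocation above reads its bdev list from gen_nvmf_target_json on /dev/fd/63, and the trace that follows shows that generator at work: one bdev_nvme_attach_controller stanza per subsystem (Nvme1..Nvme10 against nqn.2016-06.io.spdk:cnode1..10, all on 10.0.0.2:4420), each built with a heredoc inside a loop, then comma-joined and pushed through jq. A stripped-down sketch of that pattern; the hdgst/ddgst values are hard-coded here, and the final jq wrapping into the complete bdevperf JSON document is omitted because it is not visible in this excerpt:

# sketch of the per-subsystem config generation seen in the trace below; not the test helper itself
config=()
for i in $(seq 1 10); do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$i",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$i",
    "hostnqn": "nqn.2016-06.io.spdk:host$i",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done
(IFS=,; printf '%s\n' "${config[*]}")   # comma-join the stanzas, as the IFS=, / printf step in the trace does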
00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.797 { 00:23:43.797 "params": { 00:23:43.797 "name": "Nvme$subsystem", 00:23:43.797 "trtype": "$TEST_TRANSPORT", 00:23:43.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.797 "adrfam": "ipv4", 00:23:43.797 "trsvcid": "$NVMF_PORT", 00:23:43.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.797 "hdgst": ${hdgst:-false}, 00:23:43.797 "ddgst": ${ddgst:-false} 00:23:43.797 }, 00:23:43.797 "method": "bdev_nvme_attach_controller" 00:23:43.797 } 00:23:43.797 EOF 00:23:43.797 )") 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.797 { 00:23:43.797 "params": { 00:23:43.797 "name": "Nvme$subsystem", 00:23:43.797 "trtype": "$TEST_TRANSPORT", 00:23:43.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.797 "adrfam": "ipv4", 00:23:43.797 "trsvcid": "$NVMF_PORT", 00:23:43.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.797 "hdgst": ${hdgst:-false}, 00:23:43.797 "ddgst": ${ddgst:-false} 00:23:43.797 }, 00:23:43.797 "method": "bdev_nvme_attach_controller" 00:23:43.797 } 00:23:43.797 EOF 00:23:43.797 )") 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.797 { 00:23:43.797 "params": { 00:23:43.797 "name": "Nvme$subsystem", 00:23:43.797 "trtype": "$TEST_TRANSPORT", 00:23:43.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.797 "adrfam": "ipv4", 00:23:43.797 "trsvcid": "$NVMF_PORT", 00:23:43.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.797 "hdgst": ${hdgst:-false}, 00:23:43.797 "ddgst": ${ddgst:-false} 00:23:43.797 }, 00:23:43.797 "method": "bdev_nvme_attach_controller" 00:23:43.797 } 00:23:43.797 EOF 00:23:43.797 )") 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.797 { 00:23:43.797 "params": { 00:23:43.797 "name": "Nvme$subsystem", 00:23:43.797 "trtype": "$TEST_TRANSPORT", 00:23:43.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.797 "adrfam": "ipv4", 00:23:43.797 "trsvcid": "$NVMF_PORT", 00:23:43.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.797 "hdgst": ${hdgst:-false}, 00:23:43.797 "ddgst": ${ddgst:-false} 00:23:43.797 }, 00:23:43.797 "method": "bdev_nvme_attach_controller" 00:23:43.797 } 00:23:43.797 EOF 00:23:43.797 )") 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.797 { 00:23:43.797 "params": { 00:23:43.797 "name": "Nvme$subsystem", 00:23:43.797 "trtype": "$TEST_TRANSPORT", 00:23:43.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.797 "adrfam": "ipv4", 00:23:43.797 "trsvcid": "$NVMF_PORT", 00:23:43.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.797 "hdgst": ${hdgst:-false}, 00:23:43.797 "ddgst": ${ddgst:-false} 00:23:43.797 }, 00:23:43.797 "method": "bdev_nvme_attach_controller" 00:23:43.797 } 00:23:43.797 EOF 00:23:43.797 )") 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.797 { 00:23:43.797 "params": { 00:23:43.797 "name": "Nvme$subsystem", 00:23:43.797 "trtype": "$TEST_TRANSPORT", 00:23:43.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.797 "adrfam": "ipv4", 00:23:43.797 "trsvcid": "$NVMF_PORT", 00:23:43.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.797 "hdgst": ${hdgst:-false}, 00:23:43.797 "ddgst": ${ddgst:-false} 00:23:43.797 }, 00:23:43.797 "method": "bdev_nvme_attach_controller" 00:23:43.797 } 00:23:43.797 EOF 00:23:43.797 )") 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:43.797 [2024-07-15 11:35:12.404193] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:23:43.797 [2024-07-15 11:35:12.404246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3626382 ] 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.797 { 00:23:43.797 "params": { 00:23:43.797 "name": "Nvme$subsystem", 00:23:43.797 "trtype": "$TEST_TRANSPORT", 00:23:43.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.797 "adrfam": "ipv4", 00:23:43.797 "trsvcid": "$NVMF_PORT", 00:23:43.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.797 "hdgst": ${hdgst:-false}, 00:23:43.797 "ddgst": ${ddgst:-false} 00:23:43.797 }, 00:23:43.797 "method": "bdev_nvme_attach_controller" 00:23:43.797 } 00:23:43.797 EOF 00:23:43.797 )") 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.797 { 00:23:43.797 "params": { 00:23:43.797 "name": "Nvme$subsystem", 00:23:43.797 "trtype": "$TEST_TRANSPORT", 00:23:43.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.797 "adrfam": "ipv4", 00:23:43.797 "trsvcid": "$NVMF_PORT", 00:23:43.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.797 "hdgst": ${hdgst:-false}, 00:23:43.797 "ddgst": ${ddgst:-false} 00:23:43.797 }, 00:23:43.797 "method": "bdev_nvme_attach_controller" 00:23:43.797 } 00:23:43.797 EOF 00:23:43.797 )") 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.797 { 00:23:43.797 "params": { 00:23:43.797 "name": "Nvme$subsystem", 00:23:43.797 "trtype": "$TEST_TRANSPORT", 00:23:43.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.797 "adrfam": "ipv4", 00:23:43.797 "trsvcid": "$NVMF_PORT", 00:23:43.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.797 "hdgst": ${hdgst:-false}, 00:23:43.797 "ddgst": ${ddgst:-false} 00:23:43.797 }, 00:23:43.797 "method": "bdev_nvme_attach_controller" 00:23:43.797 } 00:23:43.797 EOF 00:23:43.797 )") 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.797 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.797 { 00:23:43.797 "params": { 00:23:43.797 "name": "Nvme$subsystem", 00:23:43.797 "trtype": "$TEST_TRANSPORT", 00:23:43.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.797 "adrfam": "ipv4", 00:23:43.797 "trsvcid": "$NVMF_PORT", 00:23:43.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.797 
"hdgst": ${hdgst:-false}, 00:23:43.797 "ddgst": ${ddgst:-false} 00:23:43.797 }, 00:23:43.797 "method": "bdev_nvme_attach_controller" 00:23:43.797 } 00:23:43.797 EOF 00:23:43.797 )") 00:23:43.798 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.798 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:43.798 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:23:43.798 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:43.798 11:35:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:43.798 "params": { 00:23:43.798 "name": "Nvme1", 00:23:43.798 "trtype": "tcp", 00:23:43.798 "traddr": "10.0.0.2", 00:23:43.798 "adrfam": "ipv4", 00:23:43.798 "trsvcid": "4420", 00:23:43.798 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.798 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:43.798 "hdgst": false, 00:23:43.798 "ddgst": false 00:23:43.798 }, 00:23:43.798 "method": "bdev_nvme_attach_controller" 00:23:43.798 },{ 00:23:43.798 "params": { 00:23:43.798 "name": "Nvme2", 00:23:43.798 "trtype": "tcp", 00:23:43.798 "traddr": "10.0.0.2", 00:23:43.798 "adrfam": "ipv4", 00:23:43.798 "trsvcid": "4420", 00:23:43.798 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:43.798 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:43.798 "hdgst": false, 00:23:43.798 "ddgst": false 00:23:43.798 }, 00:23:43.798 "method": "bdev_nvme_attach_controller" 00:23:43.798 },{ 00:23:43.798 "params": { 00:23:43.798 "name": "Nvme3", 00:23:43.798 "trtype": "tcp", 00:23:43.798 "traddr": "10.0.0.2", 00:23:43.798 "adrfam": "ipv4", 00:23:43.798 "trsvcid": "4420", 00:23:43.798 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:43.798 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:43.798 "hdgst": false, 00:23:43.798 "ddgst": false 00:23:43.798 }, 00:23:43.798 "method": "bdev_nvme_attach_controller" 00:23:43.798 },{ 00:23:43.798 "params": { 00:23:43.798 "name": "Nvme4", 00:23:43.798 "trtype": "tcp", 00:23:43.798 "traddr": "10.0.0.2", 00:23:43.798 "adrfam": "ipv4", 00:23:43.798 "trsvcid": "4420", 00:23:43.798 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:43.798 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:43.798 "hdgst": false, 00:23:43.798 "ddgst": false 00:23:43.798 }, 00:23:43.798 "method": "bdev_nvme_attach_controller" 00:23:43.798 },{ 00:23:43.798 "params": { 00:23:43.798 "name": "Nvme5", 00:23:43.798 "trtype": "tcp", 00:23:43.798 "traddr": "10.0.0.2", 00:23:43.798 "adrfam": "ipv4", 00:23:43.798 "trsvcid": "4420", 00:23:43.798 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:43.798 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:43.798 "hdgst": false, 00:23:43.798 "ddgst": false 00:23:43.798 }, 00:23:43.798 "method": "bdev_nvme_attach_controller" 00:23:43.798 },{ 00:23:43.798 "params": { 00:23:43.798 "name": "Nvme6", 00:23:43.798 "trtype": "tcp", 00:23:43.798 "traddr": "10.0.0.2", 00:23:43.798 "adrfam": "ipv4", 00:23:43.798 "trsvcid": "4420", 00:23:43.798 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:43.798 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:43.798 "hdgst": false, 00:23:43.798 "ddgst": false 00:23:43.798 }, 00:23:43.798 "method": "bdev_nvme_attach_controller" 00:23:43.798 },{ 00:23:43.798 "params": { 00:23:43.798 "name": "Nvme7", 00:23:43.798 "trtype": "tcp", 00:23:43.798 "traddr": "10.0.0.2", 00:23:43.798 "adrfam": "ipv4", 00:23:43.798 "trsvcid": "4420", 00:23:43.798 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:43.798 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:43.798 "hdgst": false, 
00:23:43.798 "ddgst": false 00:23:43.798 }, 00:23:43.798 "method": "bdev_nvme_attach_controller" 00:23:43.798 },{ 00:23:43.798 "params": { 00:23:43.798 "name": "Nvme8", 00:23:43.798 "trtype": "tcp", 00:23:43.798 "traddr": "10.0.0.2", 00:23:43.798 "adrfam": "ipv4", 00:23:43.798 "trsvcid": "4420", 00:23:43.798 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:43.798 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:43.798 "hdgst": false, 00:23:43.798 "ddgst": false 00:23:43.798 }, 00:23:43.798 "method": "bdev_nvme_attach_controller" 00:23:43.798 },{ 00:23:43.798 "params": { 00:23:43.798 "name": "Nvme9", 00:23:43.798 "trtype": "tcp", 00:23:43.798 "traddr": "10.0.0.2", 00:23:43.798 "adrfam": "ipv4", 00:23:43.798 "trsvcid": "4420", 00:23:43.798 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:43.798 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:43.798 "hdgst": false, 00:23:43.798 "ddgst": false 00:23:43.798 }, 00:23:43.798 "method": "bdev_nvme_attach_controller" 00:23:43.798 },{ 00:23:43.798 "params": { 00:23:43.798 "name": "Nvme10", 00:23:43.798 "trtype": "tcp", 00:23:43.798 "traddr": "10.0.0.2", 00:23:43.798 "adrfam": "ipv4", 00:23:43.798 "trsvcid": "4420", 00:23:43.798 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:43.798 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:43.798 "hdgst": false, 00:23:43.798 "ddgst": false 00:23:43.798 }, 00:23:43.798 "method": "bdev_nvme_attach_controller" 00:23:43.798 }' 00:23:43.798 [2024-07-15 11:35:12.463916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.058 [2024-07-15 11:35:12.528239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.440 Running I/O for 10 seconds... 00:23:45.440 11:35:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:45.440 11:35:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:45.440 11:35:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:45.440 11:35:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.440 11:35:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:45.440 11:35:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.440 11:35:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:45.440 11:35:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:45.440 11:35:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:45.440 11:35:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:45.440 11:35:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:45.440 11:35:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:45.440 11:35:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:45.440 11:35:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:45.440 11:35:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:45.440 11:35:13 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:45.441 11:35:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.441 11:35:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:45.441 11:35:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.441 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:45.441 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:45.441 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:45.700 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:45.700 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:45.700 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:45.700 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:45.700 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.700 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:45.700 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.700 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:45.700 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:45.700 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:45.961 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:45.961 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:45.961 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:45.961 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:45.961 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.961 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:45.961 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.961 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:45.961 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:45.961 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:45.961 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:45.961 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:23:45.961 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3626000 00:23:45.961 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 3626000 ']' 00:23:45.961 11:35:14 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 3626000 00:23:45.961 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:23:45.961 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:45.961 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3626000 00:23:46.236 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:46.237 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:46.237 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3626000' 00:23:46.237 killing process with pid 3626000 00:23:46.237 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 3626000 00:23:46.237 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 3626000 00:23:46.237 [2024-07-15 11:35:14.670931] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.670999] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671005] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671010] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671014] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671023] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671028] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671032] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671037] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671041] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671046] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671050] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671055] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671059] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671064] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is 
same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671068] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671073] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671078] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671082] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671087] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671091] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671096] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671101] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671105] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671109] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671114] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671119] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671127] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671131] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671136] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671146] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671152] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671157] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671161] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671181] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671187] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671191] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671196] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671201] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671214] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671218] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671223] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671228] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671232] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671237] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671242] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671247] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671251] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671255] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671260] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671264] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671269] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671273] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671277] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671282] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the 
state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671287] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671293] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671298] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671302] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.671306] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d86e0 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.672393] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.672418] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.672423] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.672428] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.672433] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.672438] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.672443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.672448] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.672453] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.672457] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.672461] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.672466] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.672470] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.672475] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.237 [2024-07-15 11:35:14.672479] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672485] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672490] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672494] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672499] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672504] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672508] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672512] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672521] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672535] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672541] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672545] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672550] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672555] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672559] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672563] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672568] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672572] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672577] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672582] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672587] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672591] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 
11:35:14.672596] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672601] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672605] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672610] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672615] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672619] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672624] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672628] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672633] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672638] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672643] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672648] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672652] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672658] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672662] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672666] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672671] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672675] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672680] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672686] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672690] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672695] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same 
with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672699] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672703] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.672707] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610240 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673767] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673778] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673783] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673788] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673792] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673797] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673802] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673807] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673812] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673816] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673821] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673825] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673830] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673834] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673838] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673846] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673851] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673856] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673860] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673865] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673869] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673874] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673878] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673882] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673887] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673891] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673896] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673901] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673905] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673910] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673914] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673918] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673923] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673927] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673931] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673937] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673942] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673947] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673951] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673956] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the 
state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673960] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673965] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.238 [2024-07-15 11:35:14.673973] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.673977] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.673981] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.673987] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.673991] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.673996] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.674001] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.674005] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.674009] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.674014] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.674018] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.674023] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.674027] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.674032] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.674037] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.674041] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.674046] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.674051] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.674058] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.674062] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x13d8b80 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675160] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675182] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675187] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675192] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675197] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675202] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675207] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675216] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675221] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675225] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675230] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675235] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675244] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675253] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675257] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675262] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675267] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675272] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675276] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 
11:35:14.675281] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675286] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675290] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675295] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675299] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675304] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675309] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675313] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675319] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675323] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675333] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675338] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675343] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675352] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675357] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675362] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675367] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675381] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same 
with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675385] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675389] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675394] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675398] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675402] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675407] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675412] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675417] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675421] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675430] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675434] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675439] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675448] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675452] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675457] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675462] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675468] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.675472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d9020 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.676142] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.676164] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.676170] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.676175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.676180] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.676184] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.676189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.676194] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.239 [2024-07-15 11:35:14.676198] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676202] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676207] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676213] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676218] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676223] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676227] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676232] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676237] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676241] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676246] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676251] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676255] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676261] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676265] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the 
state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676270] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676275] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676283] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676288] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676292] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676301] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676307] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676311] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676320] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676325] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676338] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676342] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676351] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676356] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676361] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676366] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676370] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676375] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676379] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676392] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676396] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676401] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676406] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676412] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676416] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676421] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676425] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676429] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676434] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676438] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676448] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676453] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d94e0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676944] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676959] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676964] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 
11:35:14.676968] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676974] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676978] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676983] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676988] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676993] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.676998] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.677002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.677007] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.677012] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.677017] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.677022] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.677027] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.677032] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.677040] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.677045] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.677050] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.677054] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.677059] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.677063] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.677068] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.677073] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same 
with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.677078] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.677082] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.677087] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.677092] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.677097] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.677101] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.677106] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.677110] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.677115] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.677120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.677128] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.240 [2024-07-15 11:35:14.677133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.241 [2024-07-15 11:35:14.677137] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.241 [2024-07-15 11:35:14.677142] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.241 [2024-07-15 11:35:14.677147] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.241 [2024-07-15 11:35:14.677152] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.241 [2024-07-15 11:35:14.677156] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.241 [2024-07-15 11:35:14.677161] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.241 [2024-07-15 11:35:14.677166] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.241 [2024-07-15 11:35:14.677172] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.241 [2024-07-15 11:35:14.677177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.241 [2024-07-15 11:35:14.677182] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.241 [2024-07-15 11:35:14.677267] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99a0 is same with the state(5) to be set 00:23:46.241 [2024-07-15 11:35:14.678837] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f8e0 is same with the state(5) to be set 00:23:46.241 [2024-07-15
11:35:14.679076] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f8e0 is same with the state(5) to be set 00:23:46.242 [2024-07-15 11:35:14.679080] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f8e0 is same with the state(5) to be set 00:23:46.242 [2024-07-15 11:35:14.679085] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f8e0 is same with the state(5) to be set 00:23:46.242 [2024-07-15 11:35:14.679089] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f8e0 is same with the state(5) to be set 00:23:46.242 [2024-07-15 11:35:14.679094] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f8e0 is same with the state(5) to be set 00:23:46.242 [2024-07-15 11:35:14.679098] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f8e0 is same with the state(5) to be set 00:23:46.242 [2024-07-15 11:35:14.679103] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f8e0 is same with the state(5) to be set 00:23:46.242 [2024-07-15 11:35:14.679108] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f8e0 is same with the state(5) to be set 00:23:46.242 [2024-07-15 11:35:14.679113] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f8e0 is same with the state(5) to be set 00:23:46.242 [2024-07-15 11:35:14.679117] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f8e0 is same with the state(5) to be set 00:23:46.242 [2024-07-15 11:35:14.679125] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f8e0 is same with the state(5) to be set 00:23:46.242 [2024-07-15 11:35:14.679130] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f8e0 is same with the state(5) to be set 00:23:46.242 [2024-07-15 11:35:14.683221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 
[2024-07-15 11:35:14.683338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 
11:35:14.683508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 
11:35:14.683676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 11:35:14.683825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.242 [2024-07-15 11:35:14.683834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.242 [2024-07-15 
11:35:14.683841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.683850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.243 [2024-07-15 11:35:14.683858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.683867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.243 [2024-07-15 11:35:14.683874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.683883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.243 [2024-07-15 11:35:14.683890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.683899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.243 [2024-07-15 11:35:14.683908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.683917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.243 [2024-07-15 11:35:14.683924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.683933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.243 [2024-07-15 11:35:14.683941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.683950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.243 [2024-07-15 11:35:14.683957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.683966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.243 [2024-07-15 11:35:14.683973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.683982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.243 [2024-07-15 11:35:14.683989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.683999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.243 [2024-07-15 
11:35:14.684005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.243 [2024-07-15 11:35:14.684022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.243 [2024-07-15 11:35:14.684038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.243 [2024-07-15 11:35:14.684054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.243 [2024-07-15 11:35:14.684072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.243 [2024-07-15 11:35:14.684088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.243 [2024-07-15 11:35:14.684105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.243 [2024-07-15 11:35:14.684128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.243 [2024-07-15 11:35:14.684145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.243 [2024-07-15 11:35:14.684162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.243 [2024-07-15 
11:35:14.684178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.243 [2024-07-15 11:35:14.684195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.243 [2024-07-15 11:35:14.684211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.243 [2024-07-15 11:35:14.684227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.243 [2024-07-15 11:35:14.684243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.243 [2024-07-15 11:35:14.684259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.243 [2024-07-15 11:35:14.684276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.243 [2024-07-15 11:35:14.684292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.243 [2024-07-15 11:35:14.684309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.243 [2024-07-15 11:35:14.684327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:46.243 [2024-07-15 11:35:14.684395] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18ef4f0 was disconnected and freed. reset controller. 00:23:46.243 [2024-07-15 11:35:14.684559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.243 [2024-07-15 11:35:14.684575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.243 [2024-07-15 11:35:14.684592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.243 [2024-07-15 11:35:14.684607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.243 [2024-07-15 11:35:14.684624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x192b030 is same with the state(5) to be set 00:23:46.243 [2024-07-15 11:35:14.684655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.243 [2024-07-15 11:35:14.684664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.243 [2024-07-15 11:35:14.684679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.243 [2024-07-15 11:35:14.684694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.243 [2024-07-15 11:35:14.684709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1928890 is same with the state(5) to be set 00:23:46.243 [2024-07-15 11:35:14.684735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.243 [2024-07-15 11:35:14.684743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.243 [2024-07-15 11:35:14.684759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.243 [2024-07-15 11:35:14.684778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.243 [2024-07-15 11:35:14.684786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.243 [2024-07-15 11:35:14.684794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.684801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1928bd0 is same with the state(5) to be set 00:23:46.244 [2024-07-15 11:35:14.684831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.244 [2024-07-15 11:35:14.684840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.684848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.244 [2024-07-15 11:35:14.684856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.684863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.244 [2024-07-15 11:35:14.684875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.684883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.244 [2024-07-15 11:35:14.684890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.684897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1966860 is same with the state(5) to be set 00:23:46.244 [2024-07-15 11:35:14.684922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.244 [2024-07-15 11:35:14.684930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.684939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.244 [2024-07-15 11:35:14.684946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.684954] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.244 [2024-07-15 11:35:14.684961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.684969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.244 [2024-07-15 11:35:14.684976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.684983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1921b80 is same with the state(5) to be set 00:23:46.244 [2024-07-15 11:35:14.685007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.244 [2024-07-15 11:35:14.685015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.244 [2024-07-15 11:35:14.685038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.244 [2024-07-15 11:35:14.685053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.244 [2024-07-15 11:35:14.685068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1970170 is same with the state(5) to be set 00:23:46.244 [2024-07-15 11:35:14.685098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.244 [2024-07-15 11:35:14.685107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.244 [2024-07-15 11:35:14.685128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.244 [2024-07-15 11:35:14.685144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:46.244 [2024-07-15 11:35:14.685159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1970990 is same with the state(5) to be set 00:23:46.244 [2024-07-15 11:35:14.685189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.244 [2024-07-15 11:35:14.685198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.244 [2024-07-15 11:35:14.685213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.244 [2024-07-15 11:35:14.685228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.244 [2024-07-15 11:35:14.685243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1968210 is same with the state(5) to be set 00:23:46.244 [2024-07-15 11:35:14.685270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.244 [2024-07-15 11:35:14.685279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.244 [2024-07-15 11:35:14.685296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.244 [2024-07-15 11:35:14.685311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.244 [2024-07-15 11:35:14.685326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c5d0 is same with the state(5) to be set 00:23:46.244 [2024-07-15 11:35:14.685562] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.244 [2024-07-15 11:35:14.685579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.244 [2024-07-15 11:35:14.685598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.244 [2024-07-15 11:35:14.685615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.244 [2024-07-15 11:35:14.685632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.244 [2024-07-15 11:35:14.685650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.244 [2024-07-15 11:35:14.685667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.244 [2024-07-15 11:35:14.685683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.244 [2024-07-15 11:35:14.685700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.244 [2024-07-15 11:35:14.685717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.244 [2024-07-15 11:35:14.685737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685747] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.244 [2024-07-15 11:35:14.685754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.244 [2024-07-15 11:35:14.685771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.244 [2024-07-15 11:35:14.685788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.244 [2024-07-15 11:35:14.685804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.244 [2024-07-15 11:35:14.685820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.244 [2024-07-15 11:35:14.685830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.245 [2024-07-15 11:35:14.685836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.245 [2024-07-15 11:35:14.685846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.245 [2024-07-15 11:35:14.685853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.245 [2024-07-15 11:35:14.685863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.245 [2024-07-15 11:35:14.685870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.245 [2024-07-15 11:35:14.685879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.245 [2024-07-15 11:35:14.685887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.245 [2024-07-15 11:35:14.685895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.245 [2024-07-15 11:35:14.685902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.245 [2024-07-15 11:35:14.685912] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.245 [2024-07-15 11:35:14.685919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.245 [2024-07-15 11:35:14.685928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.245 [2024-07-15 11:35:14.685937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.245 [2024-07-15 11:35:14.685947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.245 [2024-07-15 11:35:14.685954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.245 [2024-07-15 11:35:14.685964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.245 [2024-07-15 11:35:14.685971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.245 [2024-07-15 11:35:14.685980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.245 [2024-07-15 11:35:14.685988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.245 [2024-07-15 11:35:14.685997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.245 [2024-07-15 11:35:14.686004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.245 [2024-07-15 11:35:14.686014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.245 [2024-07-15 11:35:14.686021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.245 [2024-07-15 11:35:14.686030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.245 [2024-07-15 11:35:14.686037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.245 [2024-07-15 11:35:14.686047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.245 [2024-07-15 11:35:14.686055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.245 [2024-07-15 11:35:14.686064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.245 [2024-07-15 11:35:14.686071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.245 [2024-07-15 11:35:14.686080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.245 [2024-07-15 11:35:14.686088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.245 [2024-07-15 11:35:14.686098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.245 [2024-07-15 11:35:14.686105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.245 [2024-07-15 11:35:14.686114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.245 [2024-07-15 11:35:14.686127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.245 [2024-07-15 11:35:14.686137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.245 [2024-07-15 11:35:14.686144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.245 [2024-07-15 11:35:14.687398] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f8e0 is same with the state(5) to be set 00:23:46.245 [2024-07-15 11:35:14.687417] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f8e0 is same with the state(5) to be set 00:23:46.245 [2024-07-15 11:35:14.687424] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f8e0 is same with the state(5) to be set 00:23:46.245 [2024-07-15 11:35:14.687430] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160f8e0 is same with the state(5) to be set 00:23:46.245 [2024-07-15 11:35:14.687896] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160fd80 is same with the state(5) to be set 00:23:46.245 [2024-07-15 11:35:14.687910] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160fd80 is same with the state(5) to be set 00:23:46.245 [2024-07-15 11:35:14.687916] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160fd80 is same with the state(5) to be set 00:23:46.245 [2024-07-15 11:35:14.687921] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160fd80 is same with the state(5) to be set 00:23:46.245 [2024-07-15 11:35:14.687926] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160fd80 is same with the state(5) to be set 00:23:46.245 [2024-07-15 11:35:14.687930] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160fd80 is same with the state(5) to be set 00:23:46.245 [2024-07-15 11:35:14.687935] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160fd80 is same with the state(5) to be set 00:23:46.245 [2024-07-15 11:35:14.687941] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160fd80 is same with the state(5) to be set 00:23:46.245 [2024-07-15 11:35:14.687946] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160fd80 is same with the state(5) to be set 00:23:46.245 [2024-07-15 11:35:14.687951] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160fd80 is same with the state(5) to be set 00:23:46.245 [2024-07-15 11:35:14.688163] tcp.c:1607:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x160fd80 is same with the state(5) to be set 00:23:46.246 [2024-07-15 11:35:14.688167] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160fd80 is same with the state(5) to be set 00:23:46.246 [2024-07-15 11:35:14.688172] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160fd80 is same with the state(5) to be set 00:23:46.246 [2024-07-15 11:35:14.688176] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160fd80 is same with the state(5) to be set 00:23:46.246 [2024-07-15 11:35:14.688181] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160fd80 is same with the state(5) to be set 00:23:46.246 [2024-07-15 11:35:14.688186] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160fd80 is same with the state(5) to be set 00:23:46.246 [2024-07-15 11:35:14.688191] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160fd80 is same with the state(5) to be set 00:23:46.246 [2024-07-15 11:35:14.688196] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160fd80 is same with the state(5) to be set 00:23:46.246 [2024-07-15 11:35:14.688200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160fd80 is same with the state(5) to be set 00:23:46.246 [2024-07-15 11:35:14.688204] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160fd80 is same with the state(5) to be set 00:23:46.246 [2024-07-15 11:35:14.688209] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160fd80 is same with the state(5) to be set 00:23:46.246 [2024-07-15 11:35:14.696333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.246 [2024-07-15 11:35:14.696365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.246 [2024-07-15 11:35:14.696376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.246 [2024-07-15 11:35:14.696386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.246 [2024-07-15 11:35:14.696397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.246 [2024-07-15 11:35:14.696405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.246 [2024-07-15 11:35:14.696415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.246 [2024-07-15 11:35:14.696423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.246 [2024-07-15 11:35:14.696433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.246 [2024-07-15 11:35:14.696440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.246 [2024-07-15 11:35:14.696451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.246 [2024-07-15 11:35:14.696458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.246 [2024-07-15 11:35:14.696468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.246 [2024-07-15 11:35:14.696481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.246 [2024-07-15 11:35:14.696491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.246 [2024-07-15 11:35:14.696499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.246 [2024-07-15 11:35:14.696508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.246 [2024-07-15 11:35:14.696516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.246 [2024-07-15 11:35:14.696525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.246 [2024-07-15 11:35:14.696533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.246 [2024-07-15 11:35:14.696543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.246 [2024-07-15 11:35:14.696551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.246 [2024-07-15 11:35:14.696561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.246 [2024-07-15 11:35:14.696569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.246 [2024-07-15 11:35:14.696578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.246 [2024-07-15 11:35:14.696586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.246 [2024-07-15 11:35:14.696595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.246 [2024-07-15 11:35:14.696602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.246 [2024-07-15 11:35:14.696612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.246 [2024-07-15 11:35:14.696620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.246 [2024-07-15 11:35:14.696630] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.246 [2024-07-15 11:35:14.696638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.246 [2024-07-15 11:35:14.696647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.246 [2024-07-15 11:35:14.696655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.246 [2024-07-15 11:35:14.696664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.246 [2024-07-15 11:35:14.696671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.246 [2024-07-15 11:35:14.696681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.246 [2024-07-15 11:35:14.696688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.246 [2024-07-15 11:35:14.696701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.246 [2024-07-15 11:35:14.696708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.246 [2024-07-15 11:35:14.696718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.246 [2024-07-15 11:35:14.696726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.246 [2024-07-15 11:35:14.696735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.246 [2024-07-15 11:35:14.696744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.246 [2024-07-15 11:35:14.696753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.246 [2024-07-15 11:35:14.696761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.246 [2024-07-15 11:35:14.696771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.246 [2024-07-15 11:35:14.696779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.246 [2024-07-15 11:35:14.696788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.246 [2024-07-15 11:35:14.696796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.246 [2024-07-15 11:35:14.696806] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.246 [2024-07-15 11:35:14.696813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.246 [2024-07-15 11:35:14.696823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.246 [2024-07-15 11:35:14.696831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.246 [2024-07-15 11:35:14.696841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.246 [2024-07-15 11:35:14.696849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.246 [2024-07-15 11:35:14.696858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.247 [2024-07-15 11:35:14.696865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.696875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.247 [2024-07-15 11:35:14.696883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.696953] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1797dc0 was disconnected and freed. reset controller. 
00:23:46.247 [2024-07-15 11:35:14.698320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:46.247 [2024-07-15 11:35:14.698354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1928890 (9): Bad file descriptor 00:23:46.247 [2024-07-15 11:35:14.698394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x192b030 (9): Bad file descriptor 00:23:46.247 [2024-07-15 11:35:14.698414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1928bd0 (9): Bad file descriptor 00:23:46.247 [2024-07-15 11:35:14.698443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.247 [2024-07-15 11:35:14.698454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.698463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.247 [2024-07-15 11:35:14.698471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.698479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.247 [2024-07-15 11:35:14.698486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.698494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.247 [2024-07-15 11:35:14.698502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.698509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19652f0 is same with the state(5) to be set 00:23:46.247 [2024-07-15 11:35:14.698528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1966860 (9): Bad file descriptor 00:23:46.247 [2024-07-15 11:35:14.698541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1921b80 (9): Bad file descriptor 00:23:46.247 [2024-07-15 11:35:14.698557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1970170 (9): Bad file descriptor 00:23:46.247 [2024-07-15 11:35:14.698570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1970990 (9): Bad file descriptor 00:23:46.247 [2024-07-15 11:35:14.698585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1968210 (9): Bad file descriptor 00:23:46.247 [2024-07-15 11:35:14.698600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x179c5d0 (9): Bad file descriptor 00:23:46.247 [2024-07-15 11:35:14.700734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:46.247 [2024-07-15 11:35:14.701355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.247 [2024-07-15 11:35:14.701392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1928890 with 
addr=10.0.0.2, port=4420 00:23:46.247 [2024-07-15 11:35:14.701406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1928890 is same with the state(5) to be set 00:23:46.247 [2024-07-15 11:35:14.702009] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:46.247 [2024-07-15 11:35:14.702410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.247 [2024-07-15 11:35:14.702448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1970170 with addr=10.0.0.2, port=4420 00:23:46.247 [2024-07-15 11:35:14.702460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1970170 is same with the state(5) to be set 00:23:46.247 [2024-07-15 11:35:14.702478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1928890 (9): Bad file descriptor 00:23:46.247 [2024-07-15 11:35:14.702535] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:46.247 [2024-07-15 11:35:14.702574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.247 [2024-07-15 11:35:14.702590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.702609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.247 [2024-07-15 11:35:14.702618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.702629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.247 [2024-07-15 11:35:14.702636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.702647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.247 [2024-07-15 11:35:14.702655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.702664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.247 [2024-07-15 11:35:14.702672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.702682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.247 [2024-07-15 11:35:14.702690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.702699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.247 [2024-07-15 11:35:14.702708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.702717] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.247 [2024-07-15 11:35:14.702724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.702735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.247 [2024-07-15 11:35:14.702742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.702752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.247 [2024-07-15 11:35:14.702759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.702769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.247 [2024-07-15 11:35:14.702777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.702786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.247 [2024-07-15 11:35:14.702794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.702804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.247 [2024-07-15 11:35:14.702811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.702823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.247 [2024-07-15 11:35:14.702831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.702840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.247 [2024-07-15 11:35:14.702849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.702859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.247 [2024-07-15 11:35:14.702867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.702878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.247 [2024-07-15 11:35:14.702886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.702895] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.247 [2024-07-15 11:35:14.702904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.702913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.247 [2024-07-15 11:35:14.702922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.702932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.247 [2024-07-15 11:35:14.702940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.702950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.247 [2024-07-15 11:35:14.702958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.702968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.247 [2024-07-15 11:35:14.702976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.702986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.247 [2024-07-15 11:35:14.702994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.703004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.247 [2024-07-15 11:35:14.703012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.703022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.247 [2024-07-15 11:35:14.703030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.703040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.247 [2024-07-15 11:35:14.703049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.703060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.247 [2024-07-15 11:35:14.703067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.247 [2024-07-15 11:35:14.703077] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703263] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703441] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703624] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.248 [2024-07-15 11:35:14.703738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.248 [2024-07-15 11:35:14.703747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18564e0 is same with the state(5) to be set 00:23:46.248 [2024-07-15 11:35:14.703791] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18564e0 was disconnected and freed. reset controller. 
00:23:46.248 [2024-07-15 11:35:14.703834] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:46.248 [2024-07-15 11:35:14.703875] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:46.248 [2024-07-15 11:35:14.703914] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:46.248 [2024-07-15 11:35:14.703975] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:46.248 [2024-07-15 11:35:14.704014] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:46.248 [2024-07-15 11:35:14.704047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1970170 (9): Bad file descriptor 00:23:46.248 [2024-07-15 11:35:14.704059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:46.248 [2024-07-15 11:35:14.704065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:46.248 [2024-07-15 11:35:14.704074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:46.248 [2024-07-15 11:35:14.705385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.248 [2024-07-15 11:35:14.705401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:46.248 [2024-07-15 11:35:14.705425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:46.248 [2024-07-15 11:35:14.705434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:46.248 [2024-07-15 11:35:14.705443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:46.249 [2024-07-15 11:35:14.705499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.249 [2024-07-15 11:35:14.705842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.249 [2024-07-15 11:35:14.705855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1968210 with addr=10.0.0.2, port=4420 00:23:46.249 [2024-07-15 11:35:14.705865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1968210 is same with the state(5) to be set 00:23:46.249 [2024-07-15 11:35:14.706171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1968210 (9): Bad file descriptor 00:23:46.249 [2024-07-15 11:35:14.706219] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:46.249 [2024-07-15 11:35:14.706228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:46.249 [2024-07-15 11:35:14.706235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:46.249 [2024-07-15 11:35:14.706281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.249 [2024-07-15 11:35:14.708366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19652f0 (9): Bad file descriptor 00:23:46.249 [2024-07-15 11:35:14.708491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.249 [2024-07-15 11:35:14.708503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.249 [2024-07-15 11:35:14.708516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.249 [2024-07-15 11:35:14.708529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.249 [2024-07-15 11:35:14.708539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.249 [2024-07-15 11:35:14.708546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.249 [2024-07-15 11:35:14.708556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.249 [2024-07-15 11:35:14.708565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.249 [2024-07-15 11:35:14.708574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.249 [2024-07-15 11:35:14.708583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.249 [2024-07-15 11:35:14.708592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.249 [2024-07-15 11:35:14.708600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.249 [2024-07-15 11:35:14.708611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.249 [2024-07-15 11:35:14.708618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.249 [2024-07-15 11:35:14.708629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.249 [2024-07-15 11:35:14.708636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.249 [2024-07-15 11:35:14.708647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.249 [2024-07-15 11:35:14.708655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.249 [2024-07-15 11:35:14.708664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.249 [2024-07-15 11:35:14.708672] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.249
[... repeated *NOTICE* pairs from nvme_qpair.c: 243:nvme_io_qpair_print_command (READ sqid:1 cid:10-63 nsid:1 lba:17664-24448 len:128, lba stepping by 128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and nvme_qpair.c: 474:spdk_nvme_print_completion (ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0) ...]
00:23:46.250 [2024-07-15 11:35:14.709668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18559e0 is same with the state(5) to be set
[... repeated *NOTICE* pairs: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 (lba stepping by 128), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:23:46.252 [2024-07-15 11:35:14.712117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857970 is same with the state(5) to be set
[... repeated *NOTICE* pairs: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 (lba stepping by 128), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:23:46.253 [2024-07-15 11:35:14.714576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1858e90 is same with the state(5) to be set
[... repeated *NOTICE* pairs: WRITE sqid:1 cid:0-3 nsid:1 lba:32768-33152 len:128, then READ sqid:1 cid:4-26 nsid:1 lba:25088-27904 len:128 (lba stepping by 128), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:23:46.254 [2024-07-15 11:35:14.716363] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.254 [2024-07-15 11:35:14.716371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.254 [2024-07-15 11:35:14.716381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.254 [2024-07-15 11:35:14.716390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.254 [2024-07-15 11:35:14.716400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.254 [2024-07-15 11:35:14.716407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.254 [2024-07-15 11:35:14.716417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.254 [2024-07-15 11:35:14.716426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.254 [2024-07-15 11:35:14.716436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.254 [2024-07-15 11:35:14.716443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.254 [2024-07-15 11:35:14.716453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.254 [2024-07-15 11:35:14.716461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.254 [2024-07-15 11:35:14.716471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.254 [2024-07-15 11:35:14.716481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.254 [2024-07-15 11:35:14.716491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.254 [2024-07-15 11:35:14.716499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.254 [2024-07-15 11:35:14.716509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.254 [2024-07-15 11:35:14.716517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.254 [2024-07-15 11:35:14.716527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.254 [2024-07-15 11:35:14.716535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.254 [2024-07-15 11:35:14.716545] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.254 [2024-07-15 11:35:14.716553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.254 [2024-07-15 11:35:14.716562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.254 [2024-07-15 11:35:14.716571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.254 [2024-07-15 11:35:14.716580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.254 [2024-07-15 11:35:14.716589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.254 [2024-07-15 11:35:14.716598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.254 [2024-07-15 11:35:14.716606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.254 [2024-07-15 11:35:14.716616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.254 [2024-07-15 11:35:14.716624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.254 [2024-07-15 11:35:14.716633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.254 [2024-07-15 11:35:14.716641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.254 [2024-07-15 11:35:14.716651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.254 [2024-07-15 11:35:14.716659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.254 [2024-07-15 11:35:14.716668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.254 [2024-07-15 11:35:14.716676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.254 [2024-07-15 11:35:14.716685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.254 [2024-07-15 11:35:14.716694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.254 [2024-07-15 11:35:14.716705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.716713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.716722] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.716731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.716740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.716749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.716758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.716767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.716776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.716784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.716794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.716802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.716811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.716820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.716829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.716837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.716847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.716855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.716865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.716873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.716883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.716892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.716902] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.716910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.716920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.716931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.716940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.716948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.716958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.716967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.716976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.716984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.716994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.717002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.717012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.717021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.717029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796930 is same with the state(5) to be set 00:23:46.255 [2024-07-15 11:35:14.718299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.718315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.718327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.718337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.718348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.718357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.718368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.718377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.718388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.718396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.718406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.718413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.718423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.718433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.718443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.718451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.718461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.718468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.718478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.718486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.718496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.718503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.718513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.718521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.718532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.718539] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.718550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.718557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.718566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.718574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.718584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.718592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.718602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.718610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.718620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.718628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.718638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.718646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.718655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.718665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.718675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.718683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.718693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.718700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.718711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.718718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.255 [2024-07-15 11:35:14.718728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.255 [2024-07-15 11:35:14.718735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.718746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.718753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.718763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.718771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.718780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.718788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.718797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.718805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.718815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.718822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.718832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.718839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.718849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.718857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.718866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.718874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.718886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.718894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.718903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.718911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.718921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.718929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.718938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.718946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.718956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.718964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.718973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.718981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.718991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.718999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.719009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.719016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.719026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.719035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.719045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.719052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.719062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.719069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.719079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.719086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.719097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.719106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.719116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.719128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.719138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.719145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.719155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.719163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.719172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.719180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.719190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.719197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.719207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.719215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.719224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.719232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.719242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.719250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:46.256 [2024-07-15 11:35:14.719260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.719268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.719278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.719285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.719295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.719303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.719314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.719321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.719333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.719341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.719352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.719360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.719369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.719377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.719387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.719395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.719405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.719413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.719422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.719430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 
11:35:14.719439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.719447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.256 [2024-07-15 11:35:14.719456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f09a0 is same with the state(5) to be set 00:23:46.256 [2024-07-15 11:35:14.720734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.256 [2024-07-15 11:35:14.720747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.720759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.720768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.720778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.720786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.720796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.720805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.720815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.720823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.720837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.720844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.720854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.720862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.720872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.720880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.720890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.720899] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.720908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.720917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.720926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.720935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.720944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.720953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.720962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.720970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.720980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.720988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.720998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.721006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.721016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.721025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.721035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.721043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.721053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.721063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.721073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.721082] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.721091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.721100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.721110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.721117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.721133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.721140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.721150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.721158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.721168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.721176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.721186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.721194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.721204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.721212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.721222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.721230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.721239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.721247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.721256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.721264] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.721274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.721282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.721293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.721301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.721310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.721319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.721329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.721337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.721347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.721354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.721364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.721371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.721381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.721388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.721399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.721407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.721417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.721425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.721435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.721443] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.721453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.721461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.721470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.721478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.257 [2024-07-15 11:35:14.721487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.257 [2024-07-15 11:35:14.721496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.258 [2024-07-15 11:35:14.721505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.258 [2024-07-15 11:35:14.721515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.258 [2024-07-15 11:35:14.721524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.258 [2024-07-15 11:35:14.721532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.258 [2024-07-15 11:35:14.721541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.258 [2024-07-15 11:35:14.721550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.258 [2024-07-15 11:35:14.721560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.258 [2024-07-15 11:35:14.721568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.258 [2024-07-15 11:35:14.721578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.258 [2024-07-15 11:35:14.721586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.258 [2024-07-15 11:35:14.721596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.258 [2024-07-15 11:35:14.721603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.258 [2024-07-15 11:35:14.721614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.258 [2024-07-15 11:35:14.721621] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.258 [2024-07-15 11:35:14.721632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.258 [2024-07-15 11:35:14.721639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.258 [2024-07-15 11:35:14.721649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.258 [2024-07-15 11:35:14.721657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.258 [2024-07-15 11:35:14.721667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.258 [2024-07-15 11:35:14.721675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.258 [2024-07-15 11:35:14.721685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.258 [2024-07-15 11:35:14.721693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.258 [2024-07-15 11:35:14.721703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.258 [2024-07-15 11:35:14.721710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.258 [2024-07-15 11:35:14.721720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.258 [2024-07-15 11:35:14.721728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.258 [2024-07-15 11:35:14.721740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.258 [2024-07-15 11:35:14.721747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.258 [2024-07-15 11:35:14.721758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.258 [2024-07-15 11:35:14.721766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.258 [2024-07-15 11:35:14.721776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.258 [2024-07-15 11:35:14.721784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.258 [2024-07-15 11:35:14.721794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.258 [2024-07-15 11:35:14.721802] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.258 [2024-07-15 11:35:14.721812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.258 [2024-07-15 11:35:14.721820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.258 [2024-07-15 11:35:14.721831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.258 [2024-07-15 11:35:14.721839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.258 [2024-07-15 11:35:14.721850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.258 [2024-07-15 11:35:14.721858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.258 [2024-07-15 11:35:14.721868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.258 [2024-07-15 11:35:14.721876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.258 [2024-07-15 11:35:14.721886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.258 [2024-07-15 11:35:14.721893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.258 [2024-07-15 11:35:14.721902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f31d0 is same with the state(5) to be set 00:23:46.258 [2024-07-15 11:35:14.724054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.258 [2024-07-15 11:35:14.724080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:46.258 [2024-07-15 11:35:14.724089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:46.258 [2024-07-15 11:35:14.724099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:46.258 [2024-07-15 11:35:14.724182] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:46.258 [2024-07-15 11:35:14.724205] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:46.258 [2024-07-15 11:35:14.724283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:46.258 [2024-07-15 11:35:14.724295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:46.258 [2024-07-15 11:35:14.724633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.258 [2024-07-15 11:35:14.724649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x179c5d0 with addr=10.0.0.2, port=4420 00:23:46.258 [2024-07-15 11:35:14.724657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c5d0 is same with the state(5) to be set 00:23:46.258 [2024-07-15 11:35:14.725071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.258 [2024-07-15 11:35:14.725082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1970990 with addr=10.0.0.2, port=4420 00:23:46.258 [2024-07-15 11:35:14.725089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1970990 is same with the state(5) to be set 00:23:46.258 [2024-07-15 11:35:14.725411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.258 [2024-07-15 11:35:14.725422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x192b030 with addr=10.0.0.2, port=4420 00:23:46.258 [2024-07-15 11:35:14.725430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x192b030 is same with the state(5) to be set 00:23:46.258 [2024-07-15 11:35:14.725869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.258 [2024-07-15 11:35:14.725879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1921b80 with addr=10.0.0.2, port=4420 00:23:46.258 [2024-07-15 11:35:14.725887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1921b80 is same with the state(5) to be set 00:23:46.258 [2024-07-15 11:35:14.727216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.258 [2024-07-15 11:35:14.727229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.258 [2024-07-15 11:35:14.727242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727293] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727473] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727654] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727834] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.259 [2024-07-15 11:35:14.727878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.259 [2024-07-15 11:35:14.727887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.260 [2024-07-15 11:35:14.727896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.260 [2024-07-15 11:35:14.727906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.260 [2024-07-15 11:35:14.727914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.260 [2024-07-15 11:35:14.727924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.260 [2024-07-15 11:35:14.727932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.260 [2024-07-15 11:35:14.727942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.260 [2024-07-15 11:35:14.727950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.260 [2024-07-15 11:35:14.727960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.260 [2024-07-15 11:35:14.727968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.260 [2024-07-15 11:35:14.727978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.260 [2024-07-15 11:35:14.727986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.260 [2024-07-15 11:35:14.727995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.260 [2024-07-15 11:35:14.728004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.260 [2024-07-15 11:35:14.728013] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.260 [2024-07-15 11:35:14.728021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.260 [2024-07-15 11:35:14.728032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.260 [2024-07-15 11:35:14.728041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.260 [2024-07-15 11:35:14.728050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.260 [2024-07-15 11:35:14.728058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.260 [2024-07-15 11:35:14.728069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.260 [2024-07-15 11:35:14.728077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.260 [2024-07-15 11:35:14.728086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.260 [2024-07-15 11:35:14.728094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.260 [2024-07-15 11:35:14.728104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.260 [2024-07-15 11:35:14.728112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.260 [2024-07-15 11:35:14.728125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.260 [2024-07-15 11:35:14.728133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.260 [2024-07-15 11:35:14.728144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.260 [2024-07-15 11:35:14.728152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.260 [2024-07-15 11:35:14.728162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.260 [2024-07-15 11:35:14.728170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.260 [2024-07-15 11:35:14.728179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.260 [2024-07-15 11:35:14.728188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.260 [2024-07-15 11:35:14.728197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.260 [2024-07-15 11:35:14.728206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.260 [2024-07-15 11:35:14.728217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.260 [2024-07-15 11:35:14.728225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.260 [2024-07-15 11:35:14.728234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.260 [2024-07-15 11:35:14.728242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.260 [2024-07-15 11:35:14.728252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.260 [2024-07-15 11:35:14.728261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.260 [2024-07-15 11:35:14.728271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.260 [2024-07-15 11:35:14.728279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.260 [2024-07-15 11:35:14.728288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.260 [2024-07-15 11:35:14.728296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.260 [2024-07-15 11:35:14.728306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.260 [2024-07-15 11:35:14.728314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.260 [2024-07-15 11:35:14.728324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.260 [2024-07-15 11:35:14.728332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.260 [2024-07-15 11:35:14.728341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.260 [2024-07-15 11:35:14.728349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.260 [2024-07-15 11:35:14.728359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.260 [2024-07-15 11:35:14.728367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.260 [2024-07-15 11:35:14.728375] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f1d20 is same with the state(5) to be set 00:23:46.260 [2024-07-15 11:35:14.730110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:46.260 [2024-07-15 11:35:14.730138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:46.260 [2024-07-15 11:35:14.730147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:46.260 task offset: 24576 on job bdev=Nvme7n1 fails 00:23:46.260 00:23:46.260 Latency(us) 00:23:46.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.260 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:46.260 Job: Nvme1n1 ended in about 0.93 seconds with error 00:23:46.260 Verification LBA range: start 0x0 length 0x400 00:23:46.260 Nvme1n1 : 0.93 137.23 8.58 68.62 0.00 307444.34 22500.69 281367.89 00:23:46.260 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:46.260 Job: Nvme2n1 ended in about 0.93 seconds with error 00:23:46.260 Verification LBA range: start 0x0 length 0x400 00:23:46.260 Nvme2n1 : 0.93 207.08 12.94 69.03 0.00 224387.20 15947.09 249910.61 00:23:46.260 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:46.260 Job: Nvme3n1 ended in about 0.94 seconds with error 00:23:46.260 Verification LBA range: start 0x0 length 0x400 00:23:46.260 Nvme3n1 : 0.94 205.31 12.83 68.44 0.00 221669.97 16820.91 248162.99 00:23:46.260 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:46.260 Job: Nvme4n1 ended in about 0.94 seconds with error 00:23:46.260 Verification LBA range: start 0x0 length 0x400 00:23:46.260 Nvme4n1 : 0.94 136.52 8.53 68.26 0.00 290127.64 19223.89 253405.87 00:23:46.260 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:46.260 Job: Nvme5n1 ended in about 0.94 seconds with error 00:23:46.260 Verification LBA range: start 0x0 length 0x400 00:23:46.260 Nvme5n1 : 0.94 208.50 13.03 68.08 0.00 210114.30 20206.93 246415.36 00:23:46.260 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:46.260 Job: Nvme6n1 ended in about 0.92 seconds with error 00:23:46.260 Verification LBA range: start 0x0 length 0x400 00:23:46.260 Nvme6n1 : 0.92 208.28 13.02 69.43 0.00 204135.04 16711.68 246415.36 00:23:46.260 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:46.260 Job: Nvme7n1 ended in about 0.92 seconds with error 00:23:46.260 Verification LBA range: start 0x0 length 0x400 00:23:46.260 Nvme7n1 : 0.92 208.69 13.04 69.56 0.00 198947.41 14199.47 248162.99 00:23:46.260 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:46.260 Job: Nvme8n1 ended in about 0.94 seconds with error 00:23:46.260 Verification LBA range: start 0x0 length 0x400 00:23:46.260 Nvme8n1 : 0.94 135.81 8.49 67.91 0.00 266595.27 22063.79 267386.88 00:23:46.260 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:46.260 Job: Nvme9n1 ended in about 0.95 seconds with error 00:23:46.260 Verification LBA range: start 0x0 length 0x400 00:23:46.260 Nvme9n1 : 0.95 134.54 8.41 67.27 0.00 263176.82 22500.69 253405.87 00:23:46.260 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:46.260 Job: Nvme10n1 ended in about 0.94 seconds with error 00:23:46.260 Verification LBA range: start 
0x0 length 0x400 00:23:46.260 Nvme10n1 : 0.94 135.46 8.47 67.73 0.00 254778.60 23811.41 253405.87 00:23:46.260 =================================================================================================================== 00:23:46.260 Total : 1717.44 107.34 684.32 0.00 239472.82 14199.47 281367.89 00:23:46.260 [2024-07-15 11:35:14.755049] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:46.260 [2024-07-15 11:35:14.755080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:23:46.260 [2024-07-15 11:35:14.755587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.261 [2024-07-15 11:35:14.755604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1928bd0 with addr=10.0.0.2, port=4420 00:23:46.261 [2024-07-15 11:35:14.755614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1928bd0 is same with the state(5) to be set 00:23:46.261 [2024-07-15 11:35:14.756031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.261 [2024-07-15 11:35:14.756041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1966860 with addr=10.0.0.2, port=4420 00:23:46.261 [2024-07-15 11:35:14.756048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1966860 is same with the state(5) to be set 00:23:46.261 [2024-07-15 11:35:14.756061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x179c5d0 (9): Bad file descriptor 00:23:46.261 [2024-07-15 11:35:14.756072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1970990 (9): Bad file descriptor 00:23:46.261 [2024-07-15 11:35:14.756081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x192b030 (9): Bad file descriptor 00:23:46.261 [2024-07-15 11:35:14.756091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1921b80 (9): Bad file descriptor 00:23:46.261 [2024-07-15 11:35:14.756625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.261 [2024-07-15 11:35:14.756639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1928890 with addr=10.0.0.2, port=4420 00:23:46.261 [2024-07-15 11:35:14.756647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1928890 is same with the state(5) to be set 00:23:46.261 [2024-07-15 11:35:14.756911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.261 [2024-07-15 11:35:14.756922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1970170 with addr=10.0.0.2, port=4420 00:23:46.261 [2024-07-15 11:35:14.756929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1970170 is same with the state(5) to be set 00:23:46.261 [2024-07-15 11:35:14.757359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.261 [2024-07-15 11:35:14.757370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1968210 with addr=10.0.0.2, port=4420 00:23:46.261 [2024-07-15 11:35:14.757377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1968210 is same with the state(5) to be set 00:23:46.261 [2024-07-15 11:35:14.757577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno 
= 111 00:23:46.261 [2024-07-15 11:35:14.757587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19652f0 with addr=10.0.0.2, port=4420 00:23:46.261 [2024-07-15 11:35:14.757594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19652f0 is same with the state(5) to be set 00:23:46.261 [2024-07-15 11:35:14.757604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1928bd0 (9): Bad file descriptor 00:23:46.261 [2024-07-15 11:35:14.757613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1966860 (9): Bad file descriptor 00:23:46.261 [2024-07-15 11:35:14.757622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.261 [2024-07-15 11:35:14.757629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.261 [2024-07-15 11:35:14.757637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.261 [2024-07-15 11:35:14.757649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:46.261 [2024-07-15 11:35:14.757655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:46.261 [2024-07-15 11:35:14.757662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:46.261 [2024-07-15 11:35:14.757673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:46.261 [2024-07-15 11:35:14.757679] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:46.261 [2024-07-15 11:35:14.757686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:46.261 [2024-07-15 11:35:14.757696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:46.261 [2024-07-15 11:35:14.757703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:46.261 [2024-07-15 11:35:14.757711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:46.261 [2024-07-15 11:35:14.757737] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:46.261 [2024-07-15 11:35:14.757750] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:46.261 [2024-07-15 11:35:14.757760] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:46.261 [2024-07-15 11:35:14.757771] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:46.261 [2024-07-15 11:35:14.757782] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:46.261 [2024-07-15 11:35:14.757793] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:46.261 [2024-07-15 11:35:14.758129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.261 [2024-07-15 11:35:14.758143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.261 [2024-07-15 11:35:14.758150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.261 [2024-07-15 11:35:14.758157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.261 [2024-07-15 11:35:14.758165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1928890 (9): Bad file descriptor 00:23:46.261 [2024-07-15 11:35:14.758174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1970170 (9): Bad file descriptor 00:23:46.261 [2024-07-15 11:35:14.758184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1968210 (9): Bad file descriptor 00:23:46.261 [2024-07-15 11:35:14.758193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19652f0 (9): Bad file descriptor 00:23:46.261 [2024-07-15 11:35:14.758201] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:46.261 [2024-07-15 11:35:14.758208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:46.261 [2024-07-15 11:35:14.758215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:46.261 [2024-07-15 11:35:14.758225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:46.261 [2024-07-15 11:35:14.758232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:46.261 [2024-07-15 11:35:14.758238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:46.261 [2024-07-15 11:35:14.758490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.261 [2024-07-15 11:35:14.758502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.261 [2024-07-15 11:35:14.758509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:46.261 [2024-07-15 11:35:14.758516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:46.261 [2024-07-15 11:35:14.758523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:46.261 [2024-07-15 11:35:14.758533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:46.261 [2024-07-15 11:35:14.758540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:46.261 [2024-07-15 11:35:14.758547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:46.261 [2024-07-15 11:35:14.758556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:46.261 [2024-07-15 11:35:14.758563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:46.261 [2024-07-15 11:35:14.758570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:23:46.261 [2024-07-15 11:35:14.758579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:46.261 [2024-07-15 11:35:14.758586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:46.261 [2024-07-15 11:35:14.758593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:46.261 [2024-07-15 11:35:14.758626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.261 [2024-07-15 11:35:14.758635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.261 [2024-07-15 11:35:14.758641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.261 [2024-07-15 11:35:14.758647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.521 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:46.521 11:35:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:47.465 11:35:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3626382 00:23:47.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3626382) - No such process 00:23:47.465 11:35:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:47.465 11:35:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:47.465 11:35:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:47.465 11:35:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:47.465 11:35:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:47.465 11:35:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:47.465 11:35:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:47.465 11:35:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:47.465 11:35:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:47.465 11:35:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:47.465 11:35:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:47.465 11:35:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:47.465 rmmod nvme_tcp 00:23:47.465 rmmod nvme_fabrics 00:23:47.465 rmmod nvme_keyring 00:23:47.465 11:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:47.465 11:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:47.465 11:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:23:47.465 11:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:47.465 11:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:47.465 11:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp 
== \t\c\p ]] 00:23:47.465 11:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:47.465 11:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:47.465 11:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:47.465 11:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.465 11:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:47.465 11:35:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.006 11:35:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:50.006 00:23:50.006 real 0m7.491s 00:23:50.006 user 0m17.586s 00:23:50.006 sys 0m1.193s 00:23:50.006 11:35:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:50.006 11:35:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:50.006 ************************************ 00:23:50.006 END TEST nvmf_shutdown_tc3 00:23:50.006 ************************************ 00:23:50.006 11:35:18 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:50.006 11:35:18 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:23:50.006 00:23:50.006 real 0m32.115s 00:23:50.006 user 1m15.323s 00:23:50.006 sys 0m9.026s 00:23:50.006 11:35:18 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:50.006 11:35:18 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:50.006 ************************************ 00:23:50.006 END TEST nvmf_shutdown 00:23:50.006 ************************************ 00:23:50.006 11:35:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:50.006 11:35:18 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:23:50.006 11:35:18 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:50.006 11:35:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:50.006 11:35:18 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:23:50.006 11:35:18 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:50.006 11:35:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:50.006 11:35:18 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:23:50.006 11:35:18 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:50.006 11:35:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:50.006 11:35:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:50.006 11:35:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:50.006 ************************************ 00:23:50.006 START TEST nvmf_multicontroller 00:23:50.006 ************************************ 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:50.006 * Looking for test storage... 
00:23:50.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:50.006 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:50.007 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:50.007 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.007 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.007 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.007 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:50.007 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:50.007 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:50.007 11:35:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:50.007 11:35:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:50.007 11:35:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:50.007 11:35:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:50.007 11:35:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:50.007 11:35:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:50.007 11:35:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:50.007 11:35:18 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:50.007 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.007 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:50.007 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:50.007 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:50.007 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.007 11:35:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.007 11:35:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.007 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:50.007 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:50.007 11:35:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:50.007 11:35:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.591 11:35:25 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:56.591 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:56.591 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:56.592 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:56.592 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:56.592 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:56.592 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:56.853 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:56.854 11:35:25 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:56.854 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:56.854 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:56.854 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:56.854 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:56.854 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:56.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:56.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.501 ms 00:23:56.854 00:23:56.854 --- 10.0.0.2 ping statistics --- 00:23:56.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.854 rtt min/avg/max/mdev = 0.501/0.501/0.501/0.000 ms 00:23:56.854 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:56.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:56.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.415 ms 00:23:56.854 00:23:56.854 --- 10.0.0.1 ping statistics --- 00:23:56.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.854 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:23:56.854 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:56.854 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:56.854 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:56.854 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:56.854 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:56.854 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:56.854 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:56.854 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:56.854 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:56.854 11:35:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:56.854 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:56.854 11:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:56.854 11:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:56.854 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3631111 00:23:56.854 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3631111 00:23:56.854 11:35:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:56.854 11:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3631111 ']' 00:23:56.854 11:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.854 11:35:25 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:23:56.854 11:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.854 11:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:56.854 11:35:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.117 [2024-07-15 11:35:25.587797] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:23:57.117 [2024-07-15 11:35:25.587866] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.117 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.117 [2024-07-15 11:35:25.677933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:57.117 [2024-07-15 11:35:25.773141] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.117 [2024-07-15 11:35:25.773194] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.117 [2024-07-15 11:35:25.773202] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:57.117 [2024-07-15 11:35:25.773210] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:57.117 [2024-07-15 11:35:25.773216] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
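
For reference, the nvmf_tcp_init and nvmfappstart steps traced above boil down to the short sequence below (a condensed sketch: every command, interface name and address is taken from the trace itself, with the long workspace paths shortened). One e810 port, cvl_0_0, is moved into a private network namespace and becomes the target side at 10.0.0.2; its sibling cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; the target application is then launched inside that namespace:

ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one physical port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open TCP/4420 for traffic arriving on cvl_0_1
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE   # shm id 0, tracepoint mask 0xFFFF, core mask 0xE

Core mask 0xE corresponds to cores 1-3, which matches the three reactor threads reported just below, and shm id 0 is why the startup notice suggests 'spdk_trace -s nvmf -i 0' for capturing a snapshot of the enabled tracepoints.
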
00:23:57.117 [2024-07-15 11:35:25.773353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:57.117 [2024-07-15 11:35:25.773645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:57.117 [2024-07-15 11:35:25.773646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.687 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:57.687 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:57.687 11:35:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:57.687 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:57.687 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.947 11:35:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.947 11:35:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:57.947 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.947 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.947 [2024-07-15 11:35:26.398956] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.947 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.947 11:35:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:57.947 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.947 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.947 Malloc0 00:23:57.947 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.947 11:35:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:57.947 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.947 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.947 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.947 11:35:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:57.947 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.947 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.947 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.947 11:35:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:57.947 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.947 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.947 [2024-07-15 11:35:26.465467] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.947 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.947 
11:35:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:57.947 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.947 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.947 [2024-07-15 11:35:26.477411] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:57.947 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.947 11:35:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:57.947 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.947 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.947 Malloc1 00:23:57.947 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.947 11:35:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:57.948 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.948 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.948 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.948 11:35:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:57.948 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.948 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.948 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.948 11:35:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:57.948 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.948 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.948 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.948 11:35:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:57.948 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.948 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.948 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.948 11:35:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3631461 00:23:57.948 11:35:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:57.948 11:35:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:57.948 11:35:26 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 3631461 /var/tmp/bdevperf.sock 00:23:57.948 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3631461 ']' 00:23:57.948 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:57.948 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:57.948 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:57.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:57.948 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:57.948 11:35:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.885 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:58.885 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:58.885 11:35:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:58.885 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.885 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.144 NVMe0n1 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.144 1 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.144 request: 00:23:59.144 { 00:23:59.144 "name": "NVMe0", 00:23:59.144 "trtype": "tcp", 00:23:59.144 "traddr": "10.0.0.2", 00:23:59.144 "adrfam": "ipv4", 00:23:59.144 "trsvcid": "4420", 00:23:59.144 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.144 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:59.144 "hostaddr": "10.0.0.2", 00:23:59.144 "hostsvcid": "60000", 00:23:59.144 "prchk_reftag": false, 00:23:59.144 "prchk_guard": false, 00:23:59.144 "hdgst": false, 00:23:59.144 "ddgst": false, 00:23:59.144 "method": "bdev_nvme_attach_controller", 00:23:59.144 "req_id": 1 00:23:59.144 } 00:23:59.144 Got JSON-RPC error response 00:23:59.144 response: 00:23:59.144 { 00:23:59.144 "code": -114, 00:23:59.144 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:59.144 } 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.144 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.144 request: 00:23:59.144 { 00:23:59.144 "name": "NVMe0", 00:23:59.144 "trtype": "tcp", 00:23:59.144 "traddr": "10.0.0.2", 00:23:59.144 "adrfam": "ipv4", 00:23:59.144 "trsvcid": "4420", 00:23:59.144 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:59.144 "hostaddr": "10.0.0.2", 00:23:59.144 "hostsvcid": "60000", 00:23:59.144 "prchk_reftag": false, 00:23:59.144 "prchk_guard": false, 00:23:59.144 
"hdgst": false, 00:23:59.144 "ddgst": false, 00:23:59.144 "method": "bdev_nvme_attach_controller", 00:23:59.144 "req_id": 1 00:23:59.144 } 00:23:59.144 Got JSON-RPC error response 00:23:59.144 response: 00:23:59.144 { 00:23:59.144 "code": -114, 00:23:59.145 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:59.145 } 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.145 request: 00:23:59.145 { 00:23:59.145 "name": "NVMe0", 00:23:59.145 "trtype": "tcp", 00:23:59.145 "traddr": "10.0.0.2", 00:23:59.145 "adrfam": "ipv4", 00:23:59.145 "trsvcid": "4420", 00:23:59.145 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.145 "hostaddr": "10.0.0.2", 00:23:59.145 "hostsvcid": "60000", 00:23:59.145 "prchk_reftag": false, 00:23:59.145 "prchk_guard": false, 00:23:59.145 "hdgst": false, 00:23:59.145 "ddgst": false, 00:23:59.145 "multipath": "disable", 00:23:59.145 "method": "bdev_nvme_attach_controller", 00:23:59.145 "req_id": 1 00:23:59.145 } 00:23:59.145 Got JSON-RPC error response 00:23:59.145 response: 00:23:59.145 { 00:23:59.145 "code": -114, 00:23:59.145 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:59.145 } 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:59.145 11:35:27 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.145 request: 00:23:59.145 { 00:23:59.145 "name": "NVMe0", 00:23:59.145 "trtype": "tcp", 00:23:59.145 "traddr": "10.0.0.2", 00:23:59.145 "adrfam": "ipv4", 00:23:59.145 "trsvcid": "4420", 00:23:59.145 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.145 "hostaddr": "10.0.0.2", 00:23:59.145 "hostsvcid": "60000", 00:23:59.145 "prchk_reftag": false, 00:23:59.145 "prchk_guard": false, 00:23:59.145 "hdgst": false, 00:23:59.145 "ddgst": false, 00:23:59.145 "multipath": "failover", 00:23:59.145 "method": "bdev_nvme_attach_controller", 00:23:59.145 "req_id": 1 00:23:59.145 } 00:23:59.145 Got JSON-RPC error response 00:23:59.145 response: 00:23:59.145 { 00:23:59.145 "code": -114, 00:23:59.145 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:59.145 } 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.145 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.404 00:23:59.404 11:35:27 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.404 11:35:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:59.404 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.404 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.404 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.404 11:35:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:59.404 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.404 11:35:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.663 00:23:59.663 11:35:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.663 11:35:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:59.663 11:35:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:59.663 11:35:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.663 11:35:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.663 11:35:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.663 11:35:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:59.663 11:35:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:00.601 0 00:24:00.601 11:35:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:00.601 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.601 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:00.601 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.601 11:35:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3631461 00:24:00.601 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3631461 ']' 00:24:00.601 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3631461 00:24:00.601 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:24:00.601 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:00.861 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3631461 00:24:00.861 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:00.861 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:00.861 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3631461' 00:24:00.861 killing process with pid 3631461 00:24:00.861 11:35:29 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3631461 00:24:00.861 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3631461 00:24:00.861 11:35:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:00.861 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.861 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:00.861 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.861 11:35:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:00.861 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.861 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:00.861 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.861 11:35:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:24:00.861 11:35:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:00.861 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:24:00.861 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:00.861 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:24:00.861 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:24:00.861 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:00.861 [2024-07-15 11:35:26.606497] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:24:00.861 [2024-07-15 11:35:26.606558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3631461 ] 00:24:00.861 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.861 [2024-07-15 11:35:26.665278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.861 [2024-07-15 11:35:26.730936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.861 [2024-07-15 11:35:28.148903] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name e203627d-2e81-41b3-8676-a3ddcfaed637 already exists 00:24:00.861 [2024-07-15 11:35:28.148934] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:e203627d-2e81-41b3-8676-a3ddcfaed637 alias for bdev NVMe1n1 00:24:00.861 [2024-07-15 11:35:28.148942] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:00.861 Running I/O for 1 seconds... 
00:24:00.861 00:24:00.861 Latency(us) 00:24:00.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.861 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:00.861 NVMe0n1 : 1.00 27869.81 108.87 0.00 0.00 4582.10 3549.87 16384.00 00:24:00.861 =================================================================================================================== 00:24:00.861 Total : 27869.81 108.87 0.00 0.00 4582.10 3549.87 16384.00 00:24:00.861 Received shutdown signal, test time was about 1.000000 seconds 00:24:00.861 00:24:00.861 Latency(us) 00:24:00.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.862 =================================================================================================================== 00:24:00.862 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:00.862 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:00.862 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:00.862 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:24:00.862 11:35:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:24:00.862 11:35:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:00.862 11:35:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:24:00.862 11:35:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:00.862 11:35:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:24:00.862 11:35:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:00.862 11:35:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:00.862 rmmod nvme_tcp 00:24:00.862 rmmod nvme_fabrics 00:24:01.125 rmmod nvme_keyring 00:24:01.125 11:35:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:01.125 11:35:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:24:01.125 11:35:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:24:01.125 11:35:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3631111 ']' 00:24:01.125 11:35:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3631111 00:24:01.125 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3631111 ']' 00:24:01.125 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3631111 00:24:01.125 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:24:01.125 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:01.125 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3631111 00:24:01.125 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:01.125 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:01.125 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3631111' 00:24:01.125 killing process with pid 3631111 00:24:01.125 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3631111 00:24:01.125 11:35:29 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3631111 00:24:01.125 11:35:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:01.125 11:35:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:01.125 11:35:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:01.125 11:35:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:01.125 11:35:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:01.125 11:35:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.125 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:01.125 11:35:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.707 11:35:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:03.707 00:24:03.707 real 0m13.634s 00:24:03.707 user 0m17.346s 00:24:03.707 sys 0m6.027s 00:24:03.707 11:35:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:03.707 11:35:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.707 ************************************ 00:24:03.707 END TEST nvmf_multicontroller 00:24:03.707 ************************************ 00:24:03.707 11:35:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:03.707 11:35:31 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:03.707 11:35:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:03.707 11:35:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:03.707 11:35:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:03.707 ************************************ 00:24:03.707 START TEST nvmf_aer 00:24:03.707 ************************************ 00:24:03.707 11:35:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:03.707 * Looking for test storage... 
00:24:03.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:24:03.708 11:35:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:10.293 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 
0x159b)' 00:24:10.293 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:10.293 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:10.293 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:10.293 
11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:10.293 11:35:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:10.554 11:35:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:10.554 11:35:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:10.554 11:35:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:10.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:10.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.511 ms 00:24:10.554 00:24:10.554 --- 10.0.0.2 ping statistics --- 00:24:10.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.554 rtt min/avg/max/mdev = 0.511/0.511/0.511/0.000 ms 00:24:10.554 11:35:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:10.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:10.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.371 ms 00:24:10.554 00:24:10.554 --- 10.0.0.1 ping statistics --- 00:24:10.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.554 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:24:10.554 11:35:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.554 11:35:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:24:10.554 11:35:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:10.554 11:35:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.554 11:35:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:10.554 11:35:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:10.554 11:35:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.554 11:35:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:10.554 11:35:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:10.554 11:35:39 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:10.554 11:35:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:10.554 11:35:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:10.554 11:35:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:10.554 11:35:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3636149 00:24:10.554 11:35:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3636149 00:24:10.554 11:35:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:10.554 11:35:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 3636149 ']' 00:24:10.554 11:35:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.554 11:35:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:10.554 11:35:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.554 11:35:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:10.554 11:35:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:10.554 [2024-07-15 11:35:39.153332] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:24:10.554 [2024-07-15 11:35:39.153396] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.554 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.554 [2024-07-15 11:35:39.225319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:10.814 [2024-07-15 11:35:39.300679] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.814 [2024-07-15 11:35:39.300717] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:10.814 [2024-07-15 11:35:39.300725] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.814 [2024-07-15 11:35:39.300731] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.814 [2024-07-15 11:35:39.300737] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:10.814 [2024-07-15 11:35:39.300875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.814 [2024-07-15 11:35:39.300989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:10.814 [2024-07-15 11:35:39.301162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:10.814 [2024-07-15 11:35:39.301180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.384 11:35:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:11.384 11:35:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:24:11.384 11:35:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:11.384 11:35:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:11.384 11:35:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.384 11:35:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:11.384 11:35:39 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:11.384 11:35:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.384 11:35:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.384 [2024-07-15 11:35:39.986747] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.384 11:35:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.384 11:35:39 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:11.384 11:35:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.384 11:35:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.384 Malloc0 00:24:11.384 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.384 11:35:40 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:11.384 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.384 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.384 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.384 11:35:40 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:11.384 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.384 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.384 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.384 11:35:40 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:11.384 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.384 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.384 [2024-07-15 11:35:40.047115] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:24:11.384 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.384 11:35:40 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:11.384 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.384 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.384 [ 00:24:11.384 { 00:24:11.384 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:11.384 "subtype": "Discovery", 00:24:11.384 "listen_addresses": [], 00:24:11.384 "allow_any_host": true, 00:24:11.384 "hosts": [] 00:24:11.384 }, 00:24:11.384 { 00:24:11.384 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.384 "subtype": "NVMe", 00:24:11.385 "listen_addresses": [ 00:24:11.385 { 00:24:11.385 "trtype": "TCP", 00:24:11.385 "adrfam": "IPv4", 00:24:11.385 "traddr": "10.0.0.2", 00:24:11.385 "trsvcid": "4420" 00:24:11.385 } 00:24:11.385 ], 00:24:11.385 "allow_any_host": true, 00:24:11.385 "hosts": [], 00:24:11.385 "serial_number": "SPDK00000000000001", 00:24:11.385 "model_number": "SPDK bdev Controller", 00:24:11.385 "max_namespaces": 2, 00:24:11.385 "min_cntlid": 1, 00:24:11.385 "max_cntlid": 65519, 00:24:11.385 "namespaces": [ 00:24:11.385 { 00:24:11.385 "nsid": 1, 00:24:11.385 "bdev_name": "Malloc0", 00:24:11.385 "name": "Malloc0", 00:24:11.385 "nguid": "715B82C179E647A98ABAB5D6CBB3FEB4", 00:24:11.385 "uuid": "715b82c1-79e6-47a9-8aba-b5d6cbb3feb4" 00:24:11.385 } 00:24:11.385 ] 00:24:11.385 } 00:24:11.385 ] 00:24:11.385 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.385 11:35:40 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:11.385 11:35:40 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:11.385 11:35:40 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3636271 00:24:11.385 11:35:40 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:11.385 11:35:40 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:11.385 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:24:11.385 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:11.385 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:24:11.385 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:24:11.385 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:11.645 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.645 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:11.645 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:24:11.645 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:24:11.645 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:11.645 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:11.645 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:11.645 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:24:11.645 11:35:40 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:11.645 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.645 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.645 Malloc1 00:24:11.645 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.645 11:35:40 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:11.645 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.645 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.645 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.645 11:35:40 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:11.645 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.645 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.645 Asynchronous Event Request test 00:24:11.645 Attaching to 10.0.0.2 00:24:11.645 Attached to 10.0.0.2 00:24:11.645 Registering asynchronous event callbacks... 00:24:11.645 Starting namespace attribute notice tests for all controllers... 00:24:11.645 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:11.645 aer_cb - Changed Namespace 00:24:11.645 Cleaning up... 00:24:11.906 [ 00:24:11.906 { 00:24:11.906 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:11.906 "subtype": "Discovery", 00:24:11.906 "listen_addresses": [], 00:24:11.906 "allow_any_host": true, 00:24:11.906 "hosts": [] 00:24:11.906 }, 00:24:11.906 { 00:24:11.906 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.906 "subtype": "NVMe", 00:24:11.906 "listen_addresses": [ 00:24:11.906 { 00:24:11.906 "trtype": "TCP", 00:24:11.906 "adrfam": "IPv4", 00:24:11.906 "traddr": "10.0.0.2", 00:24:11.906 "trsvcid": "4420" 00:24:11.906 } 00:24:11.906 ], 00:24:11.906 "allow_any_host": true, 00:24:11.906 "hosts": [], 00:24:11.906 "serial_number": "SPDK00000000000001", 00:24:11.906 "model_number": "SPDK bdev Controller", 00:24:11.906 "max_namespaces": 2, 00:24:11.906 "min_cntlid": 1, 00:24:11.906 "max_cntlid": 65519, 00:24:11.906 "namespaces": [ 00:24:11.906 { 00:24:11.906 "nsid": 1, 00:24:11.906 "bdev_name": "Malloc0", 00:24:11.906 "name": "Malloc0", 00:24:11.906 "nguid": "715B82C179E647A98ABAB5D6CBB3FEB4", 00:24:11.906 "uuid": "715b82c1-79e6-47a9-8aba-b5d6cbb3feb4" 00:24:11.906 }, 00:24:11.906 { 00:24:11.906 "nsid": 2, 00:24:11.906 "bdev_name": "Malloc1", 00:24:11.906 "name": "Malloc1", 00:24:11.906 "nguid": "8427785EBC34472FA813037DC1483F3C", 00:24:11.906 "uuid": "8427785e-bc34-472f-a813-037dc1483f3c" 00:24:11.906 } 00:24:11.906 ] 00:24:11.906 } 00:24:11.906 ] 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3636271 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
-- # rpc_cmd bdev_malloc_delete Malloc1 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:11.906 rmmod nvme_tcp 00:24:11.906 rmmod nvme_fabrics 00:24:11.906 rmmod nvme_keyring 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3636149 ']' 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3636149 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 3636149 ']' 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 3636149 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3636149 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3636149' 00:24:11.906 killing process with pid 3636149 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 3636149 00:24:11.906 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 3636149 00:24:12.167 11:35:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:12.167 11:35:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:12.167 11:35:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:12.167 11:35:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:12.167 11:35:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:12.167 11:35:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.167 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:24:12.167 11:35:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.077 11:35:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:14.077 00:24:14.077 real 0m10.780s 00:24:14.077 user 0m7.481s 00:24:14.077 sys 0m5.655s 00:24:14.077 11:35:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:14.077 11:35:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:14.077 ************************************ 00:24:14.077 END TEST nvmf_aer 00:24:14.077 ************************************ 00:24:14.338 11:35:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:14.338 11:35:42 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:14.338 11:35:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:14.338 11:35:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:14.338 11:35:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:14.338 ************************************ 00:24:14.338 START TEST nvmf_async_init 00:24:14.338 ************************************ 00:24:14.338 11:35:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:14.338 * Looking for test storage... 00:24:14.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:14.338 11:35:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:14.338 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:14.338 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.338 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.338 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:14.338 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.338 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:14.338 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:14.338 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.338 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:14.338 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.338 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=502244e5320e4c85a780224b9051f01b 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:24:14.339 11:35:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.474 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:22.474 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:24:22.474 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:22.474 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:22.474 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:22.474 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:22.474 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:22.474 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:22.475 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:22.475 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ tcp == rdma ]] 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:22.475 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:22.475 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:22.475 
11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:22.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:22.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.536 ms 00:24:22.475 00:24:22.475 --- 10.0.0.2 ping statistics --- 00:24:22.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.475 rtt min/avg/max/mdev = 0.536/0.536/0.536/0.000 ms 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:22.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:22.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:24:22.475 00:24:22.475 --- 10.0.0.1 ping statistics --- 00:24:22.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.475 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:22.475 11:35:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:22.475 11:35:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:22.475 11:35:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:22.475 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:22.475 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.475 11:35:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:22.475 11:35:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3640493 00:24:22.475 11:35:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 
3640493 00:24:22.475 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 3640493 ']' 00:24:22.475 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.475 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:22.475 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.475 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:22.475 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.475 [2024-07-15 11:35:50.073535] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:24:22.475 [2024-07-15 11:35:50.073606] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.475 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.475 [2024-07-15 11:35:50.144740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.475 [2024-07-15 11:35:50.218567] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.475 [2024-07-15 11:35:50.218608] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.475 [2024-07-15 11:35:50.218615] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.475 [2024-07-15 11:35:50.218622] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.475 [2024-07-15 11:35:50.218628] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:22.475 [2024-07-15 11:35:50.218647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.476 [2024-07-15 11:35:50.889570] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.476 null0 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 502244e5320e4c85a780224b9051f01b 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.476 [2024-07-15 11:35:50.949821] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.476 11:35:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.736 nvme0n1 00:24:22.736 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.736 11:35:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:22.736 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.736 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.736 [ 00:24:22.736 { 00:24:22.736 "name": "nvme0n1", 00:24:22.736 "aliases": [ 00:24:22.736 "502244e5-320e-4c85-a780-224b9051f01b" 00:24:22.736 ], 00:24:22.736 "product_name": "NVMe disk", 00:24:22.736 "block_size": 512, 00:24:22.736 "num_blocks": 2097152, 00:24:22.736 "uuid": "502244e5-320e-4c85-a780-224b9051f01b", 00:24:22.736 "assigned_rate_limits": { 00:24:22.736 "rw_ios_per_sec": 0, 00:24:22.736 "rw_mbytes_per_sec": 0, 00:24:22.736 "r_mbytes_per_sec": 0, 00:24:22.736 "w_mbytes_per_sec": 0 00:24:22.736 }, 00:24:22.736 "claimed": false, 00:24:22.736 "zoned": false, 00:24:22.736 "supported_io_types": { 00:24:22.736 "read": true, 00:24:22.736 "write": true, 00:24:22.736 "unmap": false, 00:24:22.736 "flush": true, 00:24:22.736 "reset": true, 00:24:22.736 "nvme_admin": true, 00:24:22.736 "nvme_io": true, 00:24:22.736 "nvme_io_md": false, 00:24:22.736 "write_zeroes": true, 00:24:22.736 "zcopy": false, 00:24:22.736 "get_zone_info": false, 00:24:22.736 "zone_management": false, 00:24:22.736 "zone_append": false, 00:24:22.736 "compare": true, 00:24:22.736 "compare_and_write": true, 00:24:22.736 "abort": true, 00:24:22.736 "seek_hole": false, 00:24:22.736 "seek_data": false, 00:24:22.736 "copy": true, 00:24:22.736 "nvme_iov_md": false 00:24:22.736 }, 00:24:22.736 "memory_domains": [ 00:24:22.736 { 00:24:22.736 "dma_device_id": "system", 00:24:22.736 "dma_device_type": 1 00:24:22.736 } 00:24:22.736 ], 00:24:22.736 "driver_specific": { 00:24:22.736 "nvme": [ 00:24:22.737 { 00:24:22.737 "trid": { 00:24:22.737 "trtype": "TCP", 00:24:22.737 "adrfam": "IPv4", 00:24:22.737 "traddr": "10.0.0.2", 00:24:22.737 "trsvcid": "4420", 00:24:22.737 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:22.737 }, 00:24:22.737 "ctrlr_data": { 00:24:22.737 "cntlid": 1, 00:24:22.737 "vendor_id": "0x8086", 00:24:22.737 "model_number": "SPDK bdev Controller", 00:24:22.737 "serial_number": "00000000000000000000", 00:24:22.737 "firmware_revision": "24.09", 00:24:22.737 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:22.737 "oacs": { 00:24:22.737 "security": 0, 00:24:22.737 "format": 0, 00:24:22.737 "firmware": 0, 00:24:22.737 "ns_manage": 0 00:24:22.737 }, 00:24:22.737 "multi_ctrlr": true, 00:24:22.737 "ana_reporting": false 00:24:22.737 }, 00:24:22.737 "vs": { 00:24:22.737 "nvme_version": "1.3" 00:24:22.737 }, 00:24:22.737 "ns_data": { 00:24:22.737 "id": 1, 00:24:22.737 "can_share": true 00:24:22.737 } 00:24:22.737 } 00:24:22.737 ], 00:24:22.737 "mp_policy": "active_passive" 00:24:22.737 } 00:24:22.737 } 00:24:22.737 ] 00:24:22.737 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.737 11:35:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
00:24:22.737 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.737 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.737 [2024-07-15 11:35:51.223836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:22.737 [2024-07-15 11:35:51.223903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bbcdf0 (9): Bad file descriptor 00:24:22.737 [2024-07-15 11:35:51.356224] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:22.737 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.737 11:35:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:22.737 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.737 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.737 [ 00:24:22.737 { 00:24:22.737 "name": "nvme0n1", 00:24:22.737 "aliases": [ 00:24:22.737 "502244e5-320e-4c85-a780-224b9051f01b" 00:24:22.737 ], 00:24:22.737 "product_name": "NVMe disk", 00:24:22.737 "block_size": 512, 00:24:22.737 "num_blocks": 2097152, 00:24:22.737 "uuid": "502244e5-320e-4c85-a780-224b9051f01b", 00:24:22.737 "assigned_rate_limits": { 00:24:22.737 "rw_ios_per_sec": 0, 00:24:22.737 "rw_mbytes_per_sec": 0, 00:24:22.737 "r_mbytes_per_sec": 0, 00:24:22.737 "w_mbytes_per_sec": 0 00:24:22.737 }, 00:24:22.737 "claimed": false, 00:24:22.737 "zoned": false, 00:24:22.737 "supported_io_types": { 00:24:22.737 "read": true, 00:24:22.737 "write": true, 00:24:22.737 "unmap": false, 00:24:22.737 "flush": true, 00:24:22.737 "reset": true, 00:24:22.737 "nvme_admin": true, 00:24:22.737 "nvme_io": true, 00:24:22.737 "nvme_io_md": false, 00:24:22.737 "write_zeroes": true, 00:24:22.737 "zcopy": false, 00:24:22.737 "get_zone_info": false, 00:24:22.737 "zone_management": false, 00:24:22.737 "zone_append": false, 00:24:22.737 "compare": true, 00:24:22.737 "compare_and_write": true, 00:24:22.737 "abort": true, 00:24:22.737 "seek_hole": false, 00:24:22.737 "seek_data": false, 00:24:22.737 "copy": true, 00:24:22.737 "nvme_iov_md": false 00:24:22.737 }, 00:24:22.737 "memory_domains": [ 00:24:22.737 { 00:24:22.737 "dma_device_id": "system", 00:24:22.737 "dma_device_type": 1 00:24:22.737 } 00:24:22.737 ], 00:24:22.737 "driver_specific": { 00:24:22.737 "nvme": [ 00:24:22.737 { 00:24:22.737 "trid": { 00:24:22.737 "trtype": "TCP", 00:24:22.737 "adrfam": "IPv4", 00:24:22.737 "traddr": "10.0.0.2", 00:24:22.737 "trsvcid": "4420", 00:24:22.737 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:22.737 }, 00:24:22.737 "ctrlr_data": { 00:24:22.737 "cntlid": 2, 00:24:22.737 "vendor_id": "0x8086", 00:24:22.737 "model_number": "SPDK bdev Controller", 00:24:22.737 "serial_number": "00000000000000000000", 00:24:22.737 "firmware_revision": "24.09", 00:24:22.737 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:22.737 "oacs": { 00:24:22.737 "security": 0, 00:24:22.737 "format": 0, 00:24:22.737 "firmware": 0, 00:24:22.737 "ns_manage": 0 00:24:22.737 }, 00:24:22.737 "multi_ctrlr": true, 00:24:22.737 "ana_reporting": false 00:24:22.737 }, 00:24:22.737 "vs": { 00:24:22.737 "nvme_version": "1.3" 00:24:22.737 }, 00:24:22.737 "ns_data": { 00:24:22.737 "id": 1, 00:24:22.737 "can_share": true 00:24:22.737 } 00:24:22.737 } 00:24:22.737 ], 00:24:22.737 "mp_policy": "active_passive" 00:24:22.737 } 00:24:22.737 } 
00:24:22.737 ] 00:24:22.737 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.737 11:35:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.737 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.737 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.737 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.737 11:35:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:22.737 11:35:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.I7yrgn3eoI 00:24:22.737 11:35:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:22.737 11:35:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.I7yrgn3eoI 00:24:22.737 11:35:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:22.737 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.737 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.737 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.737 11:35:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:22.737 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.737 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.737 [2024-07-15 11:35:51.428465] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:22.737 [2024-07-15 11:35:51.428577] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:22.737 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.737 11:35:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.I7yrgn3eoI 00:24:22.737 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.737 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.997 [2024-07-15 11:35:51.440492] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:22.997 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.997 11:35:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.I7yrgn3eoI 00:24:22.997 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.997 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.997 [2024-07-15 11:35:51.452541] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:22.997 [2024-07-15 11:35:51.452580] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
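The calls traced above switch the subsystem from allow-any-host to an allow-listed model, open a TLS listener on port 4421, and reconnect with a pre-shared key. A sketch of the same sequence, again assuming rpc_cmd forwards to scripts/rpc.py: the key string, addresses, ports, flags and NQNs are taken verbatim from the trace, the redirect into the key file is implied by the surrounding mktemp/chmod calls, and the temporary path will differ per run:

  # write the TLS PSK to a private temp file (the trace used /tmp/tmp.I7yrgn3eoI)
  key_path=$(mktemp)
  echo -n "NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$key_path"
  chmod 0600 "$key_path"
  # restrict the subsystem, open a --secure-channel listener on 4421, and allow host1 with the PSK
  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
  # reattach the bdev controller over the secured listener, authenticating as host1
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"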
00:24:22.997 nvme0n1 00:24:22.997 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.997 11:35:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:22.997 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.997 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.997 [ 00:24:22.997 { 00:24:22.997 "name": "nvme0n1", 00:24:22.997 "aliases": [ 00:24:22.997 "502244e5-320e-4c85-a780-224b9051f01b" 00:24:22.997 ], 00:24:22.997 "product_name": "NVMe disk", 00:24:22.997 "block_size": 512, 00:24:22.997 "num_blocks": 2097152, 00:24:22.997 "uuid": "502244e5-320e-4c85-a780-224b9051f01b", 00:24:22.997 "assigned_rate_limits": { 00:24:22.997 "rw_ios_per_sec": 0, 00:24:22.997 "rw_mbytes_per_sec": 0, 00:24:22.997 "r_mbytes_per_sec": 0, 00:24:22.997 "w_mbytes_per_sec": 0 00:24:22.997 }, 00:24:22.997 "claimed": false, 00:24:22.997 "zoned": false, 00:24:22.997 "supported_io_types": { 00:24:22.997 "read": true, 00:24:22.997 "write": true, 00:24:22.997 "unmap": false, 00:24:22.997 "flush": true, 00:24:22.997 "reset": true, 00:24:22.997 "nvme_admin": true, 00:24:22.997 "nvme_io": true, 00:24:22.997 "nvme_io_md": false, 00:24:22.997 "write_zeroes": true, 00:24:22.997 "zcopy": false, 00:24:22.997 "get_zone_info": false, 00:24:22.997 "zone_management": false, 00:24:22.997 "zone_append": false, 00:24:22.997 "compare": true, 00:24:22.997 "compare_and_write": true, 00:24:22.997 "abort": true, 00:24:22.997 "seek_hole": false, 00:24:22.997 "seek_data": false, 00:24:22.997 "copy": true, 00:24:22.997 "nvme_iov_md": false 00:24:22.997 }, 00:24:22.997 "memory_domains": [ 00:24:22.997 { 00:24:22.997 "dma_device_id": "system", 00:24:22.997 "dma_device_type": 1 00:24:22.997 } 00:24:22.997 ], 00:24:22.997 "driver_specific": { 00:24:22.997 "nvme": [ 00:24:22.997 { 00:24:22.997 "trid": { 00:24:22.997 "trtype": "TCP", 00:24:22.997 "adrfam": "IPv4", 00:24:22.997 "traddr": "10.0.0.2", 00:24:22.997 "trsvcid": "4421", 00:24:22.997 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:22.997 }, 00:24:22.997 "ctrlr_data": { 00:24:22.997 "cntlid": 3, 00:24:22.997 "vendor_id": "0x8086", 00:24:22.997 "model_number": "SPDK bdev Controller", 00:24:22.997 "serial_number": "00000000000000000000", 00:24:22.997 "firmware_revision": "24.09", 00:24:22.997 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:22.997 "oacs": { 00:24:22.997 "security": 0, 00:24:22.997 "format": 0, 00:24:22.997 "firmware": 0, 00:24:22.998 "ns_manage": 0 00:24:22.998 }, 00:24:22.998 "multi_ctrlr": true, 00:24:22.998 "ana_reporting": false 00:24:22.998 }, 00:24:22.998 "vs": { 00:24:22.998 "nvme_version": "1.3" 00:24:22.998 }, 00:24:22.998 "ns_data": { 00:24:22.998 "id": 1, 00:24:22.998 "can_share": true 00:24:22.998 } 00:24:22.998 } 00:24:22.998 ], 00:24:22.998 "mp_policy": "active_passive" 00:24:22.998 } 00:24:22.998 } 00:24:22.998 ] 00:24:22.998 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.998 11:35:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.998 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.998 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.998 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.998 11:35:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.I7yrgn3eoI 00:24:22.998 11:35:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:22.998 11:35:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:24:22.998 11:35:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:22.998 11:35:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:24:22.998 11:35:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:22.998 11:35:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:24:22.998 11:35:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:22.998 11:35:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:22.998 rmmod nvme_tcp 00:24:22.998 rmmod nvme_fabrics 00:24:22.998 rmmod nvme_keyring 00:24:22.998 11:35:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:22.998 11:35:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:24:22.998 11:35:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:24:22.998 11:35:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3640493 ']' 00:24:22.998 11:35:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3640493 00:24:22.998 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 3640493 ']' 00:24:22.998 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 3640493 00:24:22.998 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:24:22.998 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:22.998 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3640493 00:24:23.257 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:23.257 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:23.257 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3640493' 00:24:23.257 killing process with pid 3640493 00:24:23.257 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 3640493 00:24:23.257 [2024-07-15 11:35:51.706137] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:23.257 [2024-07-15 11:35:51.706164] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:23.257 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 3640493 00:24:23.257 11:35:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:23.257 11:35:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:23.257 11:35:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:23.258 11:35:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:23.258 11:35:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:23.258 11:35:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.258 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:23.258 11:35:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:24:25.807 11:35:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:25.807 00:24:25.807 real 0m11.077s 00:24:25.807 user 0m3.941s 00:24:25.807 sys 0m5.602s 00:24:25.807 11:35:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:25.807 11:35:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.807 ************************************ 00:24:25.807 END TEST nvmf_async_init 00:24:25.807 ************************************ 00:24:25.807 11:35:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:25.807 11:35:53 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:25.807 11:35:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:25.807 11:35:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:25.807 11:35:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:25.807 ************************************ 00:24:25.807 START TEST dma 00:24:25.807 ************************************ 00:24:25.807 11:35:53 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:25.807 * Looking for test storage... 00:24:25.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:25.807 11:35:54 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.807 11:35:54 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:24:25.807 11:35:54 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.807 11:35:54 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.807 11:35:54 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.807 11:35:54 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.807 11:35:54 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.807 11:35:54 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.807 11:35:54 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.807 11:35:54 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.807 11:35:54 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.807 11:35:54 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.807 11:35:54 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:25.808 11:35:54 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:25.808 11:35:54 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.808 11:35:54 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.808 11:35:54 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:25.808 11:35:54 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.808 11:35:54 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:25.808 11:35:54 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.808 11:35:54 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.808 11:35:54 nvmf_tcp.dma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.808 11:35:54 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.808 11:35:54 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.808 11:35:54 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.808 11:35:54 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:24:25.808 11:35:54 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.808 11:35:54 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:24:25.808 11:35:54 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:25.808 11:35:54 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:25.808 11:35:54 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:25.808 11:35:54 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.808 11:35:54 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.808 11:35:54 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:25.808 11:35:54 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:25.808 11:35:54 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:25.808 11:35:54 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:25.808 11:35:54 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:24:25.808 00:24:25.808 real 0m0.131s 00:24:25.808 user 0m0.062s 00:24:25.808 sys 0m0.077s 00:24:25.808 11:35:54 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:25.808 11:35:54 nvmf_tcp.dma 
-- common/autotest_common.sh@10 -- # set +x 00:24:25.808 ************************************ 00:24:25.808 END TEST dma 00:24:25.808 ************************************ 00:24:25.808 11:35:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:25.808 11:35:54 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:25.808 11:35:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:25.808 11:35:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:25.808 11:35:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:25.808 ************************************ 00:24:25.808 START TEST nvmf_identify 00:24:25.808 ************************************ 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:25.808 * Looking for test storage... 00:24:25.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:24:25.808 11:35:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:32.431 11:36:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:32.431 11:36:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:32.431 11:36:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:32.431 11:36:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:32.431 11:36:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:32.431 11:36:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:32.431 11:36:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:32.431 11:36:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:32.431 11:36:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:32.431 11:36:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:32.431 11:36:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:32.431 11:36:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:32.431 11:36:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:32.431 11:36:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:32.431 11:36:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:32.431 11:36:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:32.431 11:36:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:32.431 11:36:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:32.431 11:36:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:32.431 11:36:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:32.431 11:36:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:32.431 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:32.431 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:32.431 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:32.431 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:32.431 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:32.695 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:32.695 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:32.695 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:32.695 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:32.695 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:32.695 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:32.696 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:32.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:32.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:24:32.696 00:24:32.696 --- 10.0.0.2 ping statistics --- 00:24:32.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.696 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:24:32.696 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:32.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:32.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:24:32.696 00:24:32.696 --- 10.0.0.1 ping statistics --- 00:24:32.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.696 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:24:32.696 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.696 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:32.696 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:32.696 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.696 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:32.696 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:32.696 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.696 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:32.696 11:36:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:32.696 11:36:01 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:32.696 11:36:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:32.696 11:36:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:32.696 11:36:01 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3644971 00:24:32.696 11:36:01 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:32.696 11:36:01 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:32.696 11:36:01 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3644971 00:24:32.696 11:36:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 3644971 ']' 00:24:32.696 11:36:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.696 11:36:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:32.696 11:36:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.696 11:36:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:32.696 11:36:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:32.956 [2024-07-15 11:36:01.416399] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:24:32.956 [2024-07-15 11:36:01.416469] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.956 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.956 [2024-07-15 11:36:01.491376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:32.956 [2024-07-15 11:36:01.572828] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.956 [2024-07-15 11:36:01.572873] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:32.956 [2024-07-15 11:36:01.572881] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.957 [2024-07-15 11:36:01.572888] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.957 [2024-07-15 11:36:01.572894] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:32.957 [2024-07-15 11:36:01.573041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.957 [2024-07-15 11:36:01.573189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:32.957 [2024-07-15 11:36:01.573248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.957 [2024-07-15 11:36:01.573250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:33.527 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:33.527 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:24:33.527 11:36:02 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:33.527 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.527 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:33.527 [2024-07-15 11:36:02.215595] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:33.527 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.527 11:36:02 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:33.527 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:33.527 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:33.790 11:36:02 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:33.790 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.790 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:33.790 Malloc0 00:24:33.790 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.790 11:36:02 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:33.790 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.790 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:33.790 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.790 11:36:02 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid 
ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:33.790 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.790 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:33.790 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.790 11:36:02 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:33.790 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.790 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:33.790 [2024-07-15 11:36:02.315043] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:33.790 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.790 11:36:02 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:33.790 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.790 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:33.790 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.790 11:36:02 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:33.790 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.790 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:33.790 [ 00:24:33.790 { 00:24:33.790 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:33.790 "subtype": "Discovery", 00:24:33.790 "listen_addresses": [ 00:24:33.790 { 00:24:33.790 "trtype": "TCP", 00:24:33.790 "adrfam": "IPv4", 00:24:33.790 "traddr": "10.0.0.2", 00:24:33.790 "trsvcid": "4420" 00:24:33.790 } 00:24:33.790 ], 00:24:33.790 "allow_any_host": true, 00:24:33.790 "hosts": [] 00:24:33.790 }, 00:24:33.790 { 00:24:33.790 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:33.790 "subtype": "NVMe", 00:24:33.790 "listen_addresses": [ 00:24:33.790 { 00:24:33.790 "trtype": "TCP", 00:24:33.790 "adrfam": "IPv4", 00:24:33.790 "traddr": "10.0.0.2", 00:24:33.790 "trsvcid": "4420" 00:24:33.790 } 00:24:33.790 ], 00:24:33.790 "allow_any_host": true, 00:24:33.790 "hosts": [], 00:24:33.790 "serial_number": "SPDK00000000000001", 00:24:33.790 "model_number": "SPDK bdev Controller", 00:24:33.790 "max_namespaces": 32, 00:24:33.790 "min_cntlid": 1, 00:24:33.790 "max_cntlid": 65519, 00:24:33.790 "namespaces": [ 00:24:33.790 { 00:24:33.790 "nsid": 1, 00:24:33.790 "bdev_name": "Malloc0", 00:24:33.790 "name": "Malloc0", 00:24:33.790 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:33.790 "eui64": "ABCDEF0123456789", 00:24:33.790 "uuid": "8c08592c-7114-43ed-847a-cf6d494868ab" 00:24:33.790 } 00:24:33.790 ] 00:24:33.790 } 00:24:33.790 ] 00:24:33.790 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.790 11:36:02 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:33.790 [2024-07-15 11:36:02.377624] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:24:33.790 [2024-07-15 11:36:02.377664] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645344 ] 00:24:33.790 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.790 [2024-07-15 11:36:02.410787] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:33.790 [2024-07-15 11:36:02.410844] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:33.790 [2024-07-15 11:36:02.410849] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:33.790 [2024-07-15 11:36:02.410860] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:33.790 [2024-07-15 11:36:02.410866] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:33.790 [2024-07-15 11:36:02.414152] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:33.790 [2024-07-15 11:36:02.414182] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1834ec0 0 00:24:33.790 [2024-07-15 11:36:02.422133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:33.790 [2024-07-15 11:36:02.422144] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:33.790 [2024-07-15 11:36:02.422149] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:33.790 [2024-07-15 11:36:02.422152] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:33.790 [2024-07-15 11:36:02.422190] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.790 [2024-07-15 11:36:02.422196] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.790 [2024-07-15 11:36:02.422200] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1834ec0) 00:24:33.790 [2024-07-15 11:36:02.422213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:33.790 [2024-07-15 11:36:02.422229] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b7e40, cid 0, qid 0 00:24:33.790 [2024-07-15 11:36:02.429132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.790 [2024-07-15 11:36:02.429140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.790 [2024-07-15 11:36:02.429144] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.790 [2024-07-15 11:36:02.429148] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b7e40) on tqpair=0x1834ec0 00:24:33.791 [2024-07-15 11:36:02.429159] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:33.791 [2024-07-15 11:36:02.429165] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:33.791 [2024-07-15 11:36:02.429170] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:33.791 [2024-07-15 11:36:02.429184] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.791 [2024-07-15 11:36:02.429188] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.791 [2024-07-15 11:36:02.429191] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1834ec0) 00:24:33.791 [2024-07-15 11:36:02.429198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.791 [2024-07-15 11:36:02.429215] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b7e40, cid 0, qid 0 00:24:33.791 [2024-07-15 11:36:02.429443] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.791 [2024-07-15 11:36:02.429449] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.791 [2024-07-15 11:36:02.429453] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.791 [2024-07-15 11:36:02.429457] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b7e40) on tqpair=0x1834ec0 00:24:33.791 [2024-07-15 11:36:02.429462] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:33.791 [2024-07-15 11:36:02.429469] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:33.791 [2024-07-15 11:36:02.429476] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.791 [2024-07-15 11:36:02.429479] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.791 [2024-07-15 11:36:02.429483] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1834ec0) 00:24:33.791 [2024-07-15 11:36:02.429490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.791 [2024-07-15 11:36:02.429500] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b7e40, cid 0, qid 0 00:24:33.791 [2024-07-15 11:36:02.429736] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.791 [2024-07-15 11:36:02.429743] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.791 [2024-07-15 11:36:02.429746] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.791 [2024-07-15 11:36:02.429750] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b7e40) on tqpair=0x1834ec0 00:24:33.791 [2024-07-15 11:36:02.429755] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:33.791 [2024-07-15 11:36:02.429763] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:33.791 [2024-07-15 11:36:02.429770] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.791 [2024-07-15 11:36:02.429773] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.791 [2024-07-15 11:36:02.429777] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1834ec0) 00:24:33.791 [2024-07-15 11:36:02.429783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.791 [2024-07-15 11:36:02.429793] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b7e40, cid 0, qid 0 00:24:33.791 [2024-07-15 11:36:02.429996] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.791 
[2024-07-15 11:36:02.430002] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.791 [2024-07-15 11:36:02.430005] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.791 [2024-07-15 11:36:02.430009] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b7e40) on tqpair=0x1834ec0 00:24:33.791 [2024-07-15 11:36:02.430014] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:33.791 [2024-07-15 11:36:02.430023] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.791 [2024-07-15 11:36:02.430027] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.791 [2024-07-15 11:36:02.430030] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1834ec0) 00:24:33.791 [2024-07-15 11:36:02.430037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.791 [2024-07-15 11:36:02.430046] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b7e40, cid 0, qid 0 00:24:33.791 [2024-07-15 11:36:02.430255] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.791 [2024-07-15 11:36:02.430262] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.791 [2024-07-15 11:36:02.430268] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.791 [2024-07-15 11:36:02.430272] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b7e40) on tqpair=0x1834ec0 00:24:33.791 [2024-07-15 11:36:02.430277] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:33.791 [2024-07-15 11:36:02.430282] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:33.791 [2024-07-15 11:36:02.430289] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:33.791 [2024-07-15 11:36:02.430394] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:33.791 [2024-07-15 11:36:02.430399] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:33.791 [2024-07-15 11:36:02.430407] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.791 [2024-07-15 11:36:02.430411] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.791 [2024-07-15 11:36:02.430414] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1834ec0) 00:24:33.791 [2024-07-15 11:36:02.430421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.791 [2024-07-15 11:36:02.430432] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b7e40, cid 0, qid 0 00:24:33.791 [2024-07-15 11:36:02.430653] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.791 [2024-07-15 11:36:02.430659] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.791 [2024-07-15 11:36:02.430663] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:24:33.791 [2024-07-15 11:36:02.430667] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b7e40) on tqpair=0x1834ec0 00:24:33.791 [2024-07-15 11:36:02.430672] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:33.791 [2024-07-15 11:36:02.430680] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.791 [2024-07-15 11:36:02.430684] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.791 [2024-07-15 11:36:02.430687] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1834ec0) 00:24:33.791 [2024-07-15 11:36:02.430694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.791 [2024-07-15 11:36:02.430704] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b7e40, cid 0, qid 0 00:24:33.791 [2024-07-15 11:36:02.430885] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.791 [2024-07-15 11:36:02.430892] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.791 [2024-07-15 11:36:02.430895] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.791 [2024-07-15 11:36:02.430899] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b7e40) on tqpair=0x1834ec0 00:24:33.791 [2024-07-15 11:36:02.430904] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:33.791 [2024-07-15 11:36:02.430908] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:33.791 [2024-07-15 11:36:02.430915] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:33.791 [2024-07-15 11:36:02.430929] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:33.791 [2024-07-15 11:36:02.430938] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.791 [2024-07-15 11:36:02.430944] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1834ec0) 00:24:33.791 [2024-07-15 11:36:02.430951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.791 [2024-07-15 11:36:02.430961] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b7e40, cid 0, qid 0 00:24:33.791 [2024-07-15 11:36:02.431270] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:33.791 [2024-07-15 11:36:02.431277] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:33.791 [2024-07-15 11:36:02.431281] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:33.791 [2024-07-15 11:36:02.431285] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1834ec0): datao=0, datal=4096, cccid=0 00:24:33.791 [2024-07-15 11:36:02.431289] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18b7e40) on tqpair(0x1834ec0): expected_datao=0, payload_size=4096 00:24:33.791 [2024-07-15 11:36:02.431294] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:24:33.791 [2024-07-15 11:36:02.431356] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:33.791 [2024-07-15 11:36:02.431361] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:33.791 [2024-07-15 11:36:02.472326] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.791 [2024-07-15 11:36:02.472338] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.791 [2024-07-15 11:36:02.472342] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.791 [2024-07-15 11:36:02.472346] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b7e40) on tqpair=0x1834ec0 00:24:33.791 [2024-07-15 11:36:02.472355] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:33.791 [2024-07-15 11:36:02.472364] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:33.791 [2024-07-15 11:36:02.472368] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:33.791 [2024-07-15 11:36:02.472374] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:33.791 [2024-07-15 11:36:02.472378] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:33.791 [2024-07-15 11:36:02.472383] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:33.791 [2024-07-15 11:36:02.472392] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:33.791 [2024-07-15 11:36:02.472399] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.791 [2024-07-15 11:36:02.472403] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.791 [2024-07-15 11:36:02.472407] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1834ec0) 00:24:33.791 [2024-07-15 11:36:02.472414] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:33.791 [2024-07-15 11:36:02.472427] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b7e40, cid 0, qid 0 00:24:33.791 [2024-07-15 11:36:02.472589] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.791 [2024-07-15 11:36:02.472596] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.791 [2024-07-15 11:36:02.472599] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.791 [2024-07-15 11:36:02.472603] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b7e40) on tqpair=0x1834ec0 00:24:33.791 [2024-07-15 11:36:02.472611] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.792 [2024-07-15 11:36:02.472615] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.792 [2024-07-15 11:36:02.472618] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1834ec0) 00:24:33.792 [2024-07-15 11:36:02.472627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.792 [2024-07-15 11:36:02.472633] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.792 [2024-07-15 11:36:02.472637] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.792 [2024-07-15 11:36:02.472640] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1834ec0) 00:24:33.792 [2024-07-15 11:36:02.472646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.792 [2024-07-15 11:36:02.472652] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.792 [2024-07-15 11:36:02.472656] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.792 [2024-07-15 11:36:02.472659] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1834ec0) 00:24:33.792 [2024-07-15 11:36:02.472665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.792 [2024-07-15 11:36:02.472671] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.792 [2024-07-15 11:36:02.472674] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.792 [2024-07-15 11:36:02.472678] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1834ec0) 00:24:33.792 [2024-07-15 11:36:02.472683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.792 [2024-07-15 11:36:02.472688] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:33.792 [2024-07-15 11:36:02.472699] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:33.792 [2024-07-15 11:36:02.472705] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.792 [2024-07-15 11:36:02.472709] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1834ec0) 00:24:33.792 [2024-07-15 11:36:02.472715] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.792 [2024-07-15 11:36:02.472727] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b7e40, cid 0, qid 0 00:24:33.792 [2024-07-15 11:36:02.472732] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b7fc0, cid 1, qid 0 00:24:33.792 [2024-07-15 11:36:02.472736] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b8140, cid 2, qid 0 00:24:33.792 [2024-07-15 11:36:02.472741] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b82c0, cid 3, qid 0 00:24:33.792 [2024-07-15 11:36:02.472746] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b8440, cid 4, qid 0 00:24:33.792 [2024-07-15 11:36:02.472980] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.792 [2024-07-15 11:36:02.472986] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.792 [2024-07-15 11:36:02.472990] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.792 [2024-07-15 11:36:02.472994] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b8440) on tqpair=0x1834ec0 00:24:33.792 [2024-07-15 11:36:02.472999] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:33.792 [2024-07-15 11:36:02.473004] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:33.792 [2024-07-15 11:36:02.473015] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.792 [2024-07-15 11:36:02.473019] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1834ec0) 00:24:33.792 [2024-07-15 11:36:02.473025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.792 [2024-07-15 11:36:02.473038] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b8440, cid 4, qid 0 00:24:33.792 [2024-07-15 11:36:02.477129] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:33.792 [2024-07-15 11:36:02.477137] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:33.792 [2024-07-15 11:36:02.477140] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:33.792 [2024-07-15 11:36:02.477144] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1834ec0): datao=0, datal=4096, cccid=4 00:24:33.792 [2024-07-15 11:36:02.477148] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18b8440) on tqpair(0x1834ec0): expected_datao=0, payload_size=4096 00:24:33.792 [2024-07-15 11:36:02.477152] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.792 [2024-07-15 11:36:02.477159] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:33.792 [2024-07-15 11:36:02.477163] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:33.792 [2024-07-15 11:36:02.477169] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.792 [2024-07-15 11:36:02.477174] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.792 [2024-07-15 11:36:02.477178] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.792 [2024-07-15 11:36:02.477181] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b8440) on tqpair=0x1834ec0 00:24:33.792 [2024-07-15 11:36:02.477194] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:33.792 [2024-07-15 11:36:02.477216] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.792 [2024-07-15 11:36:02.477220] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1834ec0) 00:24:33.792 [2024-07-15 11:36:02.477227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.792 [2024-07-15 11:36:02.477234] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.792 [2024-07-15 11:36:02.477238] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.792 [2024-07-15 11:36:02.477241] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1834ec0) 00:24:33.792 [2024-07-15 11:36:02.477247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.792 [2024-07-15 11:36:02.477261] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x18b8440, cid 4, qid 0 00:24:33.792 [2024-07-15 11:36:02.477267] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b85c0, cid 5, qid 0 00:24:33.792 [2024-07-15 11:36:02.477531] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:33.792 [2024-07-15 11:36:02.477537] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:33.792 [2024-07-15 11:36:02.477541] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:33.792 [2024-07-15 11:36:02.477544] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1834ec0): datao=0, datal=1024, cccid=4 00:24:33.792 [2024-07-15 11:36:02.477549] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18b8440) on tqpair(0x1834ec0): expected_datao=0, payload_size=1024 00:24:33.792 [2024-07-15 11:36:02.477553] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.792 [2024-07-15 11:36:02.477559] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:33.792 [2024-07-15 11:36:02.477563] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:33.792 [2024-07-15 11:36:02.477569] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.792 [2024-07-15 11:36:02.477574] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.792 [2024-07-15 11:36:02.477578] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.792 [2024-07-15 11:36:02.477582] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b85c0) on tqpair=0x1834ec0 00:24:34.058 [2024-07-15 11:36:02.519304] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.058 [2024-07-15 11:36:02.519316] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.058 [2024-07-15 11:36:02.519323] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.058 [2024-07-15 11:36:02.519327] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b8440) on tqpair=0x1834ec0 00:24:34.058 [2024-07-15 11:36:02.519345] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.058 [2024-07-15 11:36:02.519349] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1834ec0) 00:24:34.058 [2024-07-15 11:36:02.519356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.058 [2024-07-15 11:36:02.519372] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b8440, cid 4, qid 0 00:24:34.058 [2024-07-15 11:36:02.519586] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:34.058 [2024-07-15 11:36:02.519593] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:34.058 [2024-07-15 11:36:02.519597] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:34.058 [2024-07-15 11:36:02.519600] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1834ec0): datao=0, datal=3072, cccid=4 00:24:34.058 [2024-07-15 11:36:02.519604] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18b8440) on tqpair(0x1834ec0): expected_datao=0, payload_size=3072 00:24:34.058 [2024-07-15 11:36:02.519609] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.058 [2024-07-15 11:36:02.519616] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:34.058 [2024-07-15 11:36:02.519619] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:34.058 [2024-07-15 11:36:02.519841] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.058 [2024-07-15 11:36:02.519847] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.058 [2024-07-15 11:36:02.519851] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.058 [2024-07-15 11:36:02.519854] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b8440) on tqpair=0x1834ec0 00:24:34.058 [2024-07-15 11:36:02.519862] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.058 [2024-07-15 11:36:02.519866] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1834ec0) 00:24:34.058 [2024-07-15 11:36:02.519873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.058 [2024-07-15 11:36:02.519886] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b8440, cid 4, qid 0 00:24:34.058 [2024-07-15 11:36:02.520101] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:34.058 [2024-07-15 11:36:02.520107] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:34.058 [2024-07-15 11:36:02.520111] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:34.058 [2024-07-15 11:36:02.520114] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1834ec0): datao=0, datal=8, cccid=4 00:24:34.058 [2024-07-15 11:36:02.520119] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18b8440) on tqpair(0x1834ec0): expected_datao=0, payload_size=8 00:24:34.058 [2024-07-15 11:36:02.520128] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.058 [2024-07-15 11:36:02.520134] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:34.058 [2024-07-15 11:36:02.520138] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:34.058 [2024-07-15 11:36:02.565131] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.058 [2024-07-15 11:36:02.565140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.058 [2024-07-15 11:36:02.565144] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.058 [2024-07-15 11:36:02.565148] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b8440) on tqpair=0x1834ec0 00:24:34.058 ===================================================== 00:24:34.058 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:34.058 ===================================================== 00:24:34.058 Controller Capabilities/Features 00:24:34.058 ================================ 00:24:34.058 Vendor ID: 0000 00:24:34.058 Subsystem Vendor ID: 0000 00:24:34.058 Serial Number: .................... 00:24:34.058 Model Number: ........................................ 
00:24:34.058 Firmware Version: 24.09 00:24:34.058 Recommended Arb Burst: 0 00:24:34.058 IEEE OUI Identifier: 00 00 00 00:24:34.058 Multi-path I/O 00:24:34.058 May have multiple subsystem ports: No 00:24:34.058 May have multiple controllers: No 00:24:34.058 Associated with SR-IOV VF: No 00:24:34.058 Max Data Transfer Size: 131072 00:24:34.058 Max Number of Namespaces: 0 00:24:34.058 Max Number of I/O Queues: 1024 00:24:34.058 NVMe Specification Version (VS): 1.3 00:24:34.059 NVMe Specification Version (Identify): 1.3 00:24:34.059 Maximum Queue Entries: 128 00:24:34.059 Contiguous Queues Required: Yes 00:24:34.059 Arbitration Mechanisms Supported 00:24:34.059 Weighted Round Robin: Not Supported 00:24:34.059 Vendor Specific: Not Supported 00:24:34.059 Reset Timeout: 15000 ms 00:24:34.059 Doorbell Stride: 4 bytes 00:24:34.059 NVM Subsystem Reset: Not Supported 00:24:34.059 Command Sets Supported 00:24:34.059 NVM Command Set: Supported 00:24:34.059 Boot Partition: Not Supported 00:24:34.059 Memory Page Size Minimum: 4096 bytes 00:24:34.059 Memory Page Size Maximum: 4096 bytes 00:24:34.059 Persistent Memory Region: Not Supported 00:24:34.059 Optional Asynchronous Events Supported 00:24:34.059 Namespace Attribute Notices: Not Supported 00:24:34.059 Firmware Activation Notices: Not Supported 00:24:34.059 ANA Change Notices: Not Supported 00:24:34.059 PLE Aggregate Log Change Notices: Not Supported 00:24:34.059 LBA Status Info Alert Notices: Not Supported 00:24:34.059 EGE Aggregate Log Change Notices: Not Supported 00:24:34.059 Normal NVM Subsystem Shutdown event: Not Supported 00:24:34.059 Zone Descriptor Change Notices: Not Supported 00:24:34.059 Discovery Log Change Notices: Supported 00:24:34.059 Controller Attributes 00:24:34.059 128-bit Host Identifier: Not Supported 00:24:34.059 Non-Operational Permissive Mode: Not Supported 00:24:34.059 NVM Sets: Not Supported 00:24:34.059 Read Recovery Levels: Not Supported 00:24:34.059 Endurance Groups: Not Supported 00:24:34.059 Predictable Latency Mode: Not Supported 00:24:34.059 Traffic Based Keep ALive: Not Supported 00:24:34.059 Namespace Granularity: Not Supported 00:24:34.059 SQ Associations: Not Supported 00:24:34.059 UUID List: Not Supported 00:24:34.059 Multi-Domain Subsystem: Not Supported 00:24:34.059 Fixed Capacity Management: Not Supported 00:24:34.059 Variable Capacity Management: Not Supported 00:24:34.059 Delete Endurance Group: Not Supported 00:24:34.059 Delete NVM Set: Not Supported 00:24:34.059 Extended LBA Formats Supported: Not Supported 00:24:34.059 Flexible Data Placement Supported: Not Supported 00:24:34.059 00:24:34.059 Controller Memory Buffer Support 00:24:34.059 ================================ 00:24:34.059 Supported: No 00:24:34.059 00:24:34.059 Persistent Memory Region Support 00:24:34.059 ================================ 00:24:34.059 Supported: No 00:24:34.059 00:24:34.059 Admin Command Set Attributes 00:24:34.059 ============================ 00:24:34.059 Security Send/Receive: Not Supported 00:24:34.059 Format NVM: Not Supported 00:24:34.059 Firmware Activate/Download: Not Supported 00:24:34.059 Namespace Management: Not Supported 00:24:34.059 Device Self-Test: Not Supported 00:24:34.059 Directives: Not Supported 00:24:34.059 NVMe-MI: Not Supported 00:24:34.059 Virtualization Management: Not Supported 00:24:34.059 Doorbell Buffer Config: Not Supported 00:24:34.059 Get LBA Status Capability: Not Supported 00:24:34.059 Command & Feature Lockdown Capability: Not Supported 00:24:34.059 Abort Command Limit: 1 00:24:34.059 Async 
Event Request Limit: 4 00:24:34.059 Number of Firmware Slots: N/A 00:24:34.059 Firmware Slot 1 Read-Only: N/A 00:24:34.059 Firmware Activation Without Reset: N/A 00:24:34.059 Multiple Update Detection Support: N/A 00:24:34.059 Firmware Update Granularity: No Information Provided 00:24:34.059 Per-Namespace SMART Log: No 00:24:34.059 Asymmetric Namespace Access Log Page: Not Supported 00:24:34.059 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:34.059 Command Effects Log Page: Not Supported 00:24:34.059 Get Log Page Extended Data: Supported 00:24:34.059 Telemetry Log Pages: Not Supported 00:24:34.059 Persistent Event Log Pages: Not Supported 00:24:34.059 Supported Log Pages Log Page: May Support 00:24:34.059 Commands Supported & Effects Log Page: Not Supported 00:24:34.059 Feature Identifiers & Effects Log Page:May Support 00:24:34.059 NVMe-MI Commands & Effects Log Page: May Support 00:24:34.059 Data Area 4 for Telemetry Log: Not Supported 00:24:34.059 Error Log Page Entries Supported: 128 00:24:34.059 Keep Alive: Not Supported 00:24:34.059 00:24:34.059 NVM Command Set Attributes 00:24:34.059 ========================== 00:24:34.059 Submission Queue Entry Size 00:24:34.059 Max: 1 00:24:34.059 Min: 1 00:24:34.059 Completion Queue Entry Size 00:24:34.059 Max: 1 00:24:34.059 Min: 1 00:24:34.059 Number of Namespaces: 0 00:24:34.059 Compare Command: Not Supported 00:24:34.059 Write Uncorrectable Command: Not Supported 00:24:34.059 Dataset Management Command: Not Supported 00:24:34.059 Write Zeroes Command: Not Supported 00:24:34.059 Set Features Save Field: Not Supported 00:24:34.059 Reservations: Not Supported 00:24:34.059 Timestamp: Not Supported 00:24:34.059 Copy: Not Supported 00:24:34.059 Volatile Write Cache: Not Present 00:24:34.059 Atomic Write Unit (Normal): 1 00:24:34.059 Atomic Write Unit (PFail): 1 00:24:34.059 Atomic Compare & Write Unit: 1 00:24:34.059 Fused Compare & Write: Supported 00:24:34.059 Scatter-Gather List 00:24:34.059 SGL Command Set: Supported 00:24:34.059 SGL Keyed: Supported 00:24:34.059 SGL Bit Bucket Descriptor: Not Supported 00:24:34.059 SGL Metadata Pointer: Not Supported 00:24:34.059 Oversized SGL: Not Supported 00:24:34.059 SGL Metadata Address: Not Supported 00:24:34.059 SGL Offset: Supported 00:24:34.059 Transport SGL Data Block: Not Supported 00:24:34.059 Replay Protected Memory Block: Not Supported 00:24:34.059 00:24:34.059 Firmware Slot Information 00:24:34.059 ========================= 00:24:34.059 Active slot: 0 00:24:34.059 00:24:34.059 00:24:34.059 Error Log 00:24:34.059 ========= 00:24:34.059 00:24:34.059 Active Namespaces 00:24:34.059 ================= 00:24:34.059 Discovery Log Page 00:24:34.059 ================== 00:24:34.059 Generation Counter: 2 00:24:34.059 Number of Records: 2 00:24:34.059 Record Format: 0 00:24:34.059 00:24:34.059 Discovery Log Entry 0 00:24:34.059 ---------------------- 00:24:34.059 Transport Type: 3 (TCP) 00:24:34.059 Address Family: 1 (IPv4) 00:24:34.059 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:34.059 Entry Flags: 00:24:34.059 Duplicate Returned Information: 1 00:24:34.059 Explicit Persistent Connection Support for Discovery: 1 00:24:34.059 Transport Requirements: 00:24:34.059 Secure Channel: Not Required 00:24:34.059 Port ID: 0 (0x0000) 00:24:34.059 Controller ID: 65535 (0xffff) 00:24:34.059 Admin Max SQ Size: 128 00:24:34.059 Transport Service Identifier: 4420 00:24:34.059 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:34.059 Transport Address: 10.0.0.2 00:24:34.059 
Discovery Log Entry 1 00:24:34.059 ---------------------- 00:24:34.059 Transport Type: 3 (TCP) 00:24:34.059 Address Family: 1 (IPv4) 00:24:34.059 Subsystem Type: 2 (NVM Subsystem) 00:24:34.059 Entry Flags: 00:24:34.059 Duplicate Returned Information: 0 00:24:34.059 Explicit Persistent Connection Support for Discovery: 0 00:24:34.059 Transport Requirements: 00:24:34.059 Secure Channel: Not Required 00:24:34.059 Port ID: 0 (0x0000) 00:24:34.059 Controller ID: 65535 (0xffff) 00:24:34.059 Admin Max SQ Size: 128 00:24:34.059 Transport Service Identifier: 4420 00:24:34.059 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:34.059 Transport Address: 10.0.0.2 [2024-07-15 11:36:02.565230] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:34.059 [2024-07-15 11:36:02.565241] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b7e40) on tqpair=0x1834ec0 00:24:34.059 [2024-07-15 11:36:02.565249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.059 [2024-07-15 11:36:02.565255] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b7fc0) on tqpair=0x1834ec0 00:24:34.059 [2024-07-15 11:36:02.565259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.059 [2024-07-15 11:36:02.565264] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b8140) on tqpair=0x1834ec0 00:24:34.059 [2024-07-15 11:36:02.565269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.059 [2024-07-15 11:36:02.565273] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b82c0) on tqpair=0x1834ec0 00:24:34.059 [2024-07-15 11:36:02.565278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.059 [2024-07-15 11:36:02.565288] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.059 [2024-07-15 11:36:02.565292] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.059 [2024-07-15 11:36:02.565295] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1834ec0) 00:24:34.059 [2024-07-15 11:36:02.565303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.059 [2024-07-15 11:36:02.565317] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b82c0, cid 3, qid 0 00:24:34.059 [2024-07-15 11:36:02.565546] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.059 [2024-07-15 11:36:02.565552] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.059 [2024-07-15 11:36:02.565556] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.059 [2024-07-15 11:36:02.565559] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b82c0) on tqpair=0x1834ec0 00:24:34.059 [2024-07-15 11:36:02.565567] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.059 [2024-07-15 11:36:02.565570] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.059 [2024-07-15 11:36:02.565574] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1834ec0) 00:24:34.060 [2024-07-15 
11:36:02.565580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.060 [2024-07-15 11:36:02.565594] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b82c0, cid 3, qid 0 00:24:34.060 [2024-07-15 11:36:02.565817] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.060 [2024-07-15 11:36:02.565823] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.060 [2024-07-15 11:36:02.565826] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.565830] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b82c0) on tqpair=0x1834ec0 00:24:34.060 [2024-07-15 11:36:02.565835] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:34.060 [2024-07-15 11:36:02.565840] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:34.060 [2024-07-15 11:36:02.565849] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.565852] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.565856] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1834ec0) 00:24:34.060 [2024-07-15 11:36:02.565862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.060 [2024-07-15 11:36:02.565872] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b82c0, cid 3, qid 0 00:24:34.060 [2024-07-15 11:36:02.566070] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.060 [2024-07-15 11:36:02.566076] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.060 [2024-07-15 11:36:02.566082] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.566086] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b82c0) on tqpair=0x1834ec0 00:24:34.060 [2024-07-15 11:36:02.566095] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.566099] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.566103] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1834ec0) 00:24:34.060 [2024-07-15 11:36:02.566109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.060 [2024-07-15 11:36:02.566119] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b82c0, cid 3, qid 0 00:24:34.060 [2024-07-15 11:36:02.566314] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.060 [2024-07-15 11:36:02.566320] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.060 [2024-07-15 11:36:02.566324] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.566327] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b82c0) on tqpair=0x1834ec0 00:24:34.060 [2024-07-15 11:36:02.566337] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.566341] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.566344] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1834ec0) 00:24:34.060 [2024-07-15 11:36:02.566351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.060 [2024-07-15 11:36:02.566361] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b82c0, cid 3, qid 0 00:24:34.060 [2024-07-15 11:36:02.566582] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.060 [2024-07-15 11:36:02.566589] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.060 [2024-07-15 11:36:02.566592] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.566596] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b82c0) on tqpair=0x1834ec0 00:24:34.060 [2024-07-15 11:36:02.566605] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.566609] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.566612] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1834ec0) 00:24:34.060 [2024-07-15 11:36:02.566619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.060 [2024-07-15 11:36:02.566628] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b82c0, cid 3, qid 0 00:24:34.060 [2024-07-15 11:36:02.566851] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.060 [2024-07-15 11:36:02.566857] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.060 [2024-07-15 11:36:02.566861] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.566864] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b82c0) on tqpair=0x1834ec0 00:24:34.060 [2024-07-15 11:36:02.566874] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.566878] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.566881] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1834ec0) 00:24:34.060 [2024-07-15 11:36:02.566888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.060 [2024-07-15 11:36:02.566897] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b82c0, cid 3, qid 0 00:24:34.060 [2024-07-15 11:36:02.567085] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.060 [2024-07-15 11:36:02.567091] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.060 [2024-07-15 11:36:02.567094] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.567100] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b82c0) on tqpair=0x1834ec0 00:24:34.060 [2024-07-15 11:36:02.567109] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.567113] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.567116] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1834ec0) 00:24:34.060 [2024-07-15 11:36:02.567127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.060 [2024-07-15 11:36:02.567138] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b82c0, cid 3, qid 0 00:24:34.060 [2024-07-15 11:36:02.567359] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.060 [2024-07-15 11:36:02.567366] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.060 [2024-07-15 11:36:02.567369] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.567373] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b82c0) on tqpair=0x1834ec0 00:24:34.060 [2024-07-15 11:36:02.567382] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.567386] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.567389] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1834ec0) 00:24:34.060 [2024-07-15 11:36:02.567396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.060 [2024-07-15 11:36:02.567406] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b82c0, cid 3, qid 0 00:24:34.060 [2024-07-15 11:36:02.567627] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.060 [2024-07-15 11:36:02.567634] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.060 [2024-07-15 11:36:02.567637] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.567641] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b82c0) on tqpair=0x1834ec0 00:24:34.060 [2024-07-15 11:36:02.567650] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.567654] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.567657] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1834ec0) 00:24:34.060 [2024-07-15 11:36:02.567664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.060 [2024-07-15 11:36:02.567674] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b82c0, cid 3, qid 0 00:24:34.060 [2024-07-15 11:36:02.567854] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.060 [2024-07-15 11:36:02.567861] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.060 [2024-07-15 11:36:02.567864] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.567868] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b82c0) on tqpair=0x1834ec0 00:24:34.060 [2024-07-15 11:36:02.567877] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.567881] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.567885] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1834ec0) 00:24:34.060 [2024-07-15 11:36:02.567891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.060 [2024-07-15 11:36:02.567901] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b82c0, cid 3, qid 0 00:24:34.060 
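The run of near-identical FABRIC PROPERTY GET commands on cid:3 above is the shutdown poller of the discovery controller's destruct path (RTD3E = 0 us, shutdown timeout = 10000 ms): after shutdown is requested, the host keeps reading CSTS over the admin queue until SHST reports that shutdown processing is complete. Below is a minimal sketch of that idea, for illustration only; spdk_nvme_detach() performs this polling internally, and the transport string simply mirrors the -r key:value format used by this test rather than anything new.

/* csts_poll_sketch.c - illustrative only; assumes SPDK headers and libraries
 * are available. On a fabrics (TCP) controller, a CSTS read is carried as a
 * Fabrics Property Get on the admin queue, which is what the repeated
 * "FABRIC PROPERTY GET qid:0 cid:3" entries above correspond to. */
#include <stdio.h>
#include <spdk/env.h>
#include <spdk/nvme.h>

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	union spdk_nvme_csts_register csts;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "csts_poll_sketch";
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* Same key:value format as the identify tool's -r argument. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2014-08.org.nvmexpress.discovery") != 0) {
		return 1;
	}

	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* CSTS.SHST == 2 (10b) means shutdown processing complete; CSTS.RDY
	 * tracks the CC.EN transitions seen earlier in this log. */
	csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);
	printf("CSTS.RDY=%u CSTS.SHST=%u\n",
	       (unsigned)csts.bits.rdy, (unsigned)csts.bits.shst);

	/* The real shutdown (CC.SHN write plus SHST polling) happens inside detach. */
	spdk_nvme_detach(ctrlr);
	return 0;
}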
[2024-07-15 11:36:02.568081] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.060 [2024-07-15 11:36:02.568088] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.060 [2024-07-15 11:36:02.568091] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.568095] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b82c0) on tqpair=0x1834ec0 00:24:34.060 [2024-07-15 11:36:02.568106] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.568110] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.568114] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1834ec0) 00:24:34.060 [2024-07-15 11:36:02.568120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.060 [2024-07-15 11:36:02.568135] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b82c0, cid 3, qid 0 00:24:34.060 [2024-07-15 11:36:02.568350] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.060 [2024-07-15 11:36:02.568357] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.060 [2024-07-15 11:36:02.568360] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.568364] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b82c0) on tqpair=0x1834ec0 00:24:34.060 [2024-07-15 11:36:02.568373] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.568377] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.568380] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1834ec0) 00:24:34.060 [2024-07-15 11:36:02.568387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.060 [2024-07-15 11:36:02.568397] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b82c0, cid 3, qid 0 00:24:34.060 [2024-07-15 11:36:02.568618] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.060 [2024-07-15 11:36:02.568624] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.060 [2024-07-15 11:36:02.568628] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.568632] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b82c0) on tqpair=0x1834ec0 00:24:34.060 [2024-07-15 11:36:02.568641] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.568645] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.060 [2024-07-15 11:36:02.568648] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1834ec0) 00:24:34.060 [2024-07-15 11:36:02.568655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.060 [2024-07-15 11:36:02.568664] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b82c0, cid 3, qid 0 00:24:34.060 [2024-07-15 11:36:02.568962] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.061 [2024-07-15 11:36:02.568968] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:24:34.061 [2024-07-15 11:36:02.568971] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.061 [2024-07-15 11:36:02.568975] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b82c0) on tqpair=0x1834ec0 00:24:34.061 [2024-07-15 11:36:02.568984] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.061 [2024-07-15 11:36:02.568988] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.061 [2024-07-15 11:36:02.568992] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1834ec0) 00:24:34.061 [2024-07-15 11:36:02.568998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.061 [2024-07-15 11:36:02.569008] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b82c0, cid 3, qid 0 00:24:34.061 [2024-07-15 11:36:02.573129] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.061 [2024-07-15 11:36:02.573138] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.061 [2024-07-15 11:36:02.573142] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.061 [2024-07-15 11:36:02.573146] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b82c0) on tqpair=0x1834ec0 00:24:34.061 [2024-07-15 11:36:02.573155] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.061 [2024-07-15 11:36:02.573165] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.061 [2024-07-15 11:36:02.573168] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1834ec0) 00:24:34.061 [2024-07-15 11:36:02.573175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.061 [2024-07-15 11:36:02.573187] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18b82c0, cid 3, qid 0 00:24:34.061 [2024-07-15 11:36:02.573405] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.061 [2024-07-15 11:36:02.573412] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.061 [2024-07-15 11:36:02.573415] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.061 [2024-07-15 11:36:02.573419] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18b82c0) on tqpair=0x1834ec0 00:24:34.061 [2024-07-15 11:36:02.573426] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:24:34.061 00:24:34.061 11:36:02 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:34.061 [2024-07-15 11:36:02.613325] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
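host/identify.sh now runs the same spdk_nvme_identify binary against the NVM subsystem nqn.2016-06.io.spdk:cnode1, using the -r transport string shown on the command line above. At its core that flow is: parse the transport ID, connect, and print fields from the identify-controller data. A stripped-down sketch of that flow follows; the fields printed and the output format here are arbitrary illustration choices, not the tool's actual output.

/* identify_sketch.c - illustrative only; assumes SPDK headers and libraries
 * are available. Connects to the NVM subsystem exercised above and prints a
 * few identify-controller fields plus the active namespaces. */
#include <stdio.h>
#include <stdint.h>
#include <spdk/env.h>
#include <spdk/nvme.h>

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;
	uint32_t nsid;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* The IDENTIFY (06) cid:0 command in the log fills this structure. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("SN: %.20s  MN: %.40s  MDTS: %u  KAS: %u\n",
	       cdata->sn, cdata->mn, cdata->mdts, cdata->kas);

	/* Walk the active namespace list reported by the subsystem. */
	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
		printf("nsid %u: %ju bytes\n", nsid,
		       (uintmax_t)spdk_nvme_ns_get_size(ns));
	}

	spdk_nvme_detach(ctrlr);
	return 0;
}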
00:24:34.061 [2024-07-15 11:36:02.613367] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645352 ] 00:24:34.061 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.061 [2024-07-15 11:36:02.645659] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:34.061 [2024-07-15 11:36:02.645706] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:34.061 [2024-07-15 11:36:02.645711] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:34.061 [2024-07-15 11:36:02.645721] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:34.061 [2024-07-15 11:36:02.645727] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:34.061 [2024-07-15 11:36:02.649147] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:34.061 [2024-07-15 11:36:02.649171] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x16f7ec0 0 00:24:34.061 [2024-07-15 11:36:02.657135] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:34.061 [2024-07-15 11:36:02.657148] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:34.061 [2024-07-15 11:36:02.657152] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:34.061 [2024-07-15 11:36:02.657155] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:34.061 [2024-07-15 11:36:02.657186] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.061 [2024-07-15 11:36:02.657191] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.061 [2024-07-15 11:36:02.657195] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16f7ec0) 00:24:34.061 [2024-07-15 11:36:02.657207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:34.061 [2024-07-15 11:36:02.657223] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177ae40, cid 0, qid 0 00:24:34.061 [2024-07-15 11:36:02.665132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.061 [2024-07-15 11:36:02.665141] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.061 [2024-07-15 11:36:02.665147] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.061 [2024-07-15 11:36:02.665152] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177ae40) on tqpair=0x16f7ec0 00:24:34.061 [2024-07-15 11:36:02.665160] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:34.061 [2024-07-15 11:36:02.665167] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:34.061 [2024-07-15 11:36:02.665172] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:34.061 [2024-07-15 11:36:02.665183] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.061 [2024-07-15 11:36:02.665187] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
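Once the FABRIC CONNECT above completes (CNTLID 0x0001), the init state machine steps through "read vs" and "read cap"; on a TCP controller each of those register reads travels as one of the FABRIC PROPERTY GET commands in this trace. The short sketch below reads the same two registers through the public API; it is an illustration under the same assumptions as the sketches above (SPDK available, transport string mirroring the -r argument), and the printed fields are an arbitrary subset.

/* regs_sketch.c - illustrative only. Reads VS and CAP, the registers behind
 * the "read vs" / "read cap" init states above; over TCP each read is a
 * Fabrics Property Get on the admin queue. */
#include <stdio.h>
#include <spdk/env.h>
#include <spdk/nvme.h>

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	union spdk_nvme_vs_register vs;
	union spdk_nvme_cap_register cap;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "regs_sketch";
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);    /* NVMe spec version, e.g. 1.3 */
	cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);  /* queue depth, timeout, stride */
	printf("VS %u.%u, MQES %u (max queue entries - 1), TO %u x 500 ms\n",
	       (unsigned)vs.bits.mjr, (unsigned)vs.bits.mnr,
	       (unsigned)cap.bits.mqes, (unsigned)cap.bits.to);

	spdk_nvme_detach(ctrlr);
	return 0;
}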
00:24:34.061 [2024-07-15 11:36:02.665191] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16f7ec0) 00:24:34.061 [2024-07-15 11:36:02.665198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.061 [2024-07-15 11:36:02.665211] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177ae40, cid 0, qid 0 00:24:34.061 [2024-07-15 11:36:02.665432] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.061 [2024-07-15 11:36:02.665439] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.061 [2024-07-15 11:36:02.665443] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.061 [2024-07-15 11:36:02.665447] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177ae40) on tqpair=0x16f7ec0 00:24:34.061 [2024-07-15 11:36:02.665452] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:34.061 [2024-07-15 11:36:02.665459] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:34.061 [2024-07-15 11:36:02.665465] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.061 [2024-07-15 11:36:02.665469] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.061 [2024-07-15 11:36:02.665473] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16f7ec0) 00:24:34.061 [2024-07-15 11:36:02.665479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.061 [2024-07-15 11:36:02.665490] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177ae40, cid 0, qid 0 00:24:34.061 [2024-07-15 11:36:02.665754] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.061 [2024-07-15 11:36:02.665760] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.061 [2024-07-15 11:36:02.665764] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.061 [2024-07-15 11:36:02.665767] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177ae40) on tqpair=0x16f7ec0 00:24:34.061 [2024-07-15 11:36:02.665773] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:34.061 [2024-07-15 11:36:02.665781] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:34.061 [2024-07-15 11:36:02.665787] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.061 [2024-07-15 11:36:02.665791] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.061 [2024-07-15 11:36:02.665794] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16f7ec0) 00:24:34.061 [2024-07-15 11:36:02.665801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.061 [2024-07-15 11:36:02.665811] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177ae40, cid 0, qid 0 00:24:34.061 [2024-07-15 11:36:02.666038] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.061 [2024-07-15 11:36:02.666044] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:24:34.061 [2024-07-15 11:36:02.666047] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.061 [2024-07-15 11:36:02.666053] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177ae40) on tqpair=0x16f7ec0 00:24:34.061 [2024-07-15 11:36:02.666058] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:34.061 [2024-07-15 11:36:02.666067] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.061 [2024-07-15 11:36:02.666071] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.061 [2024-07-15 11:36:02.666075] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16f7ec0) 00:24:34.061 [2024-07-15 11:36:02.666081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.061 [2024-07-15 11:36:02.666091] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177ae40, cid 0, qid 0 00:24:34.061 [2024-07-15 11:36:02.666270] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.061 [2024-07-15 11:36:02.666277] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.061 [2024-07-15 11:36:02.666280] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.061 [2024-07-15 11:36:02.666284] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177ae40) on tqpair=0x16f7ec0 00:24:34.061 [2024-07-15 11:36:02.666288] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:34.061 [2024-07-15 11:36:02.666293] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:34.061 [2024-07-15 11:36:02.666300] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:34.061 [2024-07-15 11:36:02.666405] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:34.061 [2024-07-15 11:36:02.666409] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:34.061 [2024-07-15 11:36:02.666417] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.061 [2024-07-15 11:36:02.666421] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.061 [2024-07-15 11:36:02.666424] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16f7ec0) 00:24:34.061 [2024-07-15 11:36:02.666431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.061 [2024-07-15 11:36:02.666442] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177ae40, cid 0, qid 0 00:24:34.061 [2024-07-15 11:36:02.666653] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.061 [2024-07-15 11:36:02.666659] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.061 [2024-07-15 11:36:02.666663] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.061 [2024-07-15 11:36:02.666666] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177ae40) on 
tqpair=0x16f7ec0 00:24:34.061 [2024-07-15 11:36:02.666671] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:34.061 [2024-07-15 11:36:02.666680] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.061 [2024-07-15 11:36:02.666684] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.062 [2024-07-15 11:36:02.666687] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16f7ec0) 00:24:34.062 [2024-07-15 11:36:02.666694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.062 [2024-07-15 11:36:02.666704] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177ae40, cid 0, qid 0 00:24:34.062 [2024-07-15 11:36:02.666920] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.062 [2024-07-15 11:36:02.666926] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.062 [2024-07-15 11:36:02.666931] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.062 [2024-07-15 11:36:02.666935] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177ae40) on tqpair=0x16f7ec0 00:24:34.062 [2024-07-15 11:36:02.666940] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:34.062 [2024-07-15 11:36:02.666944] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:34.062 [2024-07-15 11:36:02.666952] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:34.062 [2024-07-15 11:36:02.666965] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:34.062 [2024-07-15 11:36:02.666974] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.062 [2024-07-15 11:36:02.666978] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16f7ec0) 00:24:34.062 [2024-07-15 11:36:02.666985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.062 [2024-07-15 11:36:02.666995] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177ae40, cid 0, qid 0 00:24:34.062 [2024-07-15 11:36:02.667279] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:34.062 [2024-07-15 11:36:02.667287] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:34.062 [2024-07-15 11:36:02.667291] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:34.062 [2024-07-15 11:36:02.667294] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16f7ec0): datao=0, datal=4096, cccid=0 00:24:34.062 [2024-07-15 11:36:02.667299] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x177ae40) on tqpair(0x16f7ec0): expected_datao=0, payload_size=4096 00:24:34.062 [2024-07-15 11:36:02.667303] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.062 [2024-07-15 11:36:02.667311] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:34.062 [2024-07-15 11:36:02.667314] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:34.062 [2024-07-15 11:36:02.667459] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.062 [2024-07-15 11:36:02.667466] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.062 [2024-07-15 11:36:02.667469] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.062 [2024-07-15 11:36:02.667473] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177ae40) on tqpair=0x16f7ec0 00:24:34.062 [2024-07-15 11:36:02.667481] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:34.062 [2024-07-15 11:36:02.667488] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:34.062 [2024-07-15 11:36:02.667493] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:34.062 [2024-07-15 11:36:02.667497] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:34.062 [2024-07-15 11:36:02.667501] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:34.062 [2024-07-15 11:36:02.667506] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:34.062 [2024-07-15 11:36:02.667514] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:34.062 [2024-07-15 11:36:02.667520] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.062 [2024-07-15 11:36:02.667524] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.062 [2024-07-15 11:36:02.667528] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16f7ec0) 00:24:34.062 [2024-07-15 11:36:02.667535] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:34.062 [2024-07-15 11:36:02.667548] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177ae40, cid 0, qid 0 00:24:34.062 [2024-07-15 11:36:02.667772] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.062 [2024-07-15 11:36:02.667778] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.062 [2024-07-15 11:36:02.667781] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.062 [2024-07-15 11:36:02.667785] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177ae40) on tqpair=0x16f7ec0 00:24:34.062 [2024-07-15 11:36:02.667792] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.062 [2024-07-15 11:36:02.667795] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.062 [2024-07-15 11:36:02.667799] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16f7ec0) 00:24:34.062 [2024-07-15 11:36:02.667805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.062 [2024-07-15 11:36:02.667811] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.062 [2024-07-15 11:36:02.667814] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.062 [2024-07-15 11:36:02.667818] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x16f7ec0) 00:24:34.062 [2024-07-15 11:36:02.667824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.062 [2024-07-15 11:36:02.667830] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.062 [2024-07-15 11:36:02.667833] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.062 [2024-07-15 11:36:02.667836] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x16f7ec0) 00:24:34.062 [2024-07-15 11:36:02.667842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.062 [2024-07-15 11:36:02.667848] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.062 [2024-07-15 11:36:02.667852] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.062 [2024-07-15 11:36:02.667855] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16f7ec0) 00:24:34.062 [2024-07-15 11:36:02.667861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.062 [2024-07-15 11:36:02.667865] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:34.062 [2024-07-15 11:36:02.667875] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:34.062 [2024-07-15 11:36:02.667882] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.062 [2024-07-15 11:36:02.667885] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16f7ec0) 00:24:34.062 [2024-07-15 11:36:02.667892] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.062 [2024-07-15 11:36:02.667903] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177ae40, cid 0, qid 0 00:24:34.062 [2024-07-15 11:36:02.667908] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177afc0, cid 1, qid 0 00:24:34.062 [2024-07-15 11:36:02.667913] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b140, cid 2, qid 0 00:24:34.062 [2024-07-15 11:36:02.667918] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b2c0, cid 3, qid 0 00:24:34.062 [2024-07-15 11:36:02.667922] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b440, cid 4, qid 0 00:24:34.062 [2024-07-15 11:36:02.668166] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.062 [2024-07-15 11:36:02.668173] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.062 [2024-07-15 11:36:02.668176] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.062 [2024-07-15 11:36:02.668182] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b440) on tqpair=0x16f7ec0 00:24:34.062 [2024-07-15 11:36:02.668186] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:34.062 [2024-07-15 11:36:02.668191] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify controller iocs specific (timeout 30000 ms) 00:24:34.062 [2024-07-15 11:36:02.668200] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:34.062 [2024-07-15 11:36:02.668206] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:34.062 [2024-07-15 11:36:02.668212] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.062 [2024-07-15 11:36:02.668216] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.062 [2024-07-15 11:36:02.668219] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16f7ec0) 00:24:34.062 [2024-07-15 11:36:02.668226] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:34.062 [2024-07-15 11:36:02.668236] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b440, cid 4, qid 0 00:24:34.062 [2024-07-15 11:36:02.668415] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.062 [2024-07-15 11:36:02.668421] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.062 [2024-07-15 11:36:02.668425] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.062 [2024-07-15 11:36:02.668428] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b440) on tqpair=0x16f7ec0 00:24:34.063 [2024-07-15 11:36:02.668491] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:34.063 [2024-07-15 11:36:02.668500] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:34.063 [2024-07-15 11:36:02.668507] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.063 [2024-07-15 11:36:02.668511] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16f7ec0) 00:24:34.063 [2024-07-15 11:36:02.668517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.063 [2024-07-15 11:36:02.668527] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b440, cid 4, qid 0 00:24:34.063 [2024-07-15 11:36:02.668782] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:34.063 [2024-07-15 11:36:02.668788] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:34.063 [2024-07-15 11:36:02.668792] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:34.063 [2024-07-15 11:36:02.668795] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16f7ec0): datao=0, datal=4096, cccid=4 00:24:34.063 [2024-07-15 11:36:02.668800] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x177b440) on tqpair(0x16f7ec0): expected_datao=0, payload_size=4096 00:24:34.063 [2024-07-15 11:36:02.668804] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.063 [2024-07-15 11:36:02.668811] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:34.063 [2024-07-15 11:36:02.668814] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:34.063 [2024-07-15 11:36:02.713129] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:24:34.063 [2024-07-15 11:36:02.713138] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.063 [2024-07-15 11:36:02.713142] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.063 [2024-07-15 11:36:02.713145] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b440) on tqpair=0x16f7ec0 00:24:34.063 [2024-07-15 11:36:02.713155] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:34.063 [2024-07-15 11:36:02.713168] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:34.063 [2024-07-15 11:36:02.713177] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:34.063 [2024-07-15 11:36:02.713184] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.063 [2024-07-15 11:36:02.713188] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16f7ec0) 00:24:34.063 [2024-07-15 11:36:02.713194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.063 [2024-07-15 11:36:02.713206] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b440, cid 4, qid 0 00:24:34.063 [2024-07-15 11:36:02.713436] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:34.063 [2024-07-15 11:36:02.713442] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:34.063 [2024-07-15 11:36:02.713446] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:34.063 [2024-07-15 11:36:02.713450] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16f7ec0): datao=0, datal=4096, cccid=4 00:24:34.063 [2024-07-15 11:36:02.713454] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x177b440) on tqpair(0x16f7ec0): expected_datao=0, payload_size=4096 00:24:34.063 [2024-07-15 11:36:02.713458] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.063 [2024-07-15 11:36:02.713498] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:34.063 [2024-07-15 11:36:02.713502] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:34.325 [2024-07-15 11:36:02.755330] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.325 [2024-07-15 11:36:02.755340] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.325 [2024-07-15 11:36:02.755344] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.325 [2024-07-15 11:36:02.755348] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b440) on tqpair=0x16f7ec0 00:24:34.325 [2024-07-15 11:36:02.755362] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:34.325 [2024-07-15 11:36:02.755371] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:34.325 [2024-07-15 11:36:02.755379] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.325 [2024-07-15 11:36:02.755383] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16f7ec0) 00:24:34.325 [2024-07-15 11:36:02.755390] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.325 [2024-07-15 11:36:02.755402] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b440, cid 4, qid 0 00:24:34.325 [2024-07-15 11:36:02.755579] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:34.325 [2024-07-15 11:36:02.755586] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:34.325 [2024-07-15 11:36:02.755589] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:34.325 [2024-07-15 11:36:02.755593] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16f7ec0): datao=0, datal=4096, cccid=4 00:24:34.325 [2024-07-15 11:36:02.755597] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x177b440) on tqpair(0x16f7ec0): expected_datao=0, payload_size=4096 00:24:34.325 [2024-07-15 11:36:02.755601] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.325 [2024-07-15 11:36:02.755710] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:34.325 [2024-07-15 11:36:02.755714] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:34.325 [2024-07-15 11:36:02.801130] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.325 [2024-07-15 11:36:02.801139] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.325 [2024-07-15 11:36:02.801148] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.325 [2024-07-15 11:36:02.801152] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b440) on tqpair=0x16f7ec0 00:24:34.325 [2024-07-15 11:36:02.801160] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:34.325 [2024-07-15 11:36:02.801168] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:34.325 [2024-07-15 11:36:02.801177] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:34.325 [2024-07-15 11:36:02.801183] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:34.325 [2024-07-15 11:36:02.801188] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:34.325 [2024-07-15 11:36:02.801193] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:34.325 [2024-07-15 11:36:02.801198] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:34.325 [2024-07-15 11:36:02.801203] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:34.325 [2024-07-15 11:36:02.801208] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:34.325 [2024-07-15 11:36:02.801222] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.325 [2024-07-15 11:36:02.801226] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x16f7ec0) 00:24:34.325 [2024-07-15 11:36:02.801232] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.325 [2024-07-15 11:36:02.801239] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.325 [2024-07-15 11:36:02.801243] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.325 [2024-07-15 11:36:02.801246] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16f7ec0) 00:24:34.325 [2024-07-15 11:36:02.801252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.325 [2024-07-15 11:36:02.801267] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b440, cid 4, qid 0 00:24:34.325 [2024-07-15 11:36:02.801272] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b5c0, cid 5, qid 0 00:24:34.325 [2024-07-15 11:36:02.801372] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.325 [2024-07-15 11:36:02.801378] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.325 [2024-07-15 11:36:02.801381] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.325 [2024-07-15 11:36:02.801385] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b440) on tqpair=0x16f7ec0 00:24:34.325 [2024-07-15 11:36:02.801392] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.325 [2024-07-15 11:36:02.801397] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.325 [2024-07-15 11:36:02.801401] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.325 [2024-07-15 11:36:02.801404] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b5c0) on tqpair=0x16f7ec0 00:24:34.325 [2024-07-15 11:36:02.801413] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.325 [2024-07-15 11:36:02.801417] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16f7ec0) 00:24:34.325 [2024-07-15 11:36:02.801423] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.325 [2024-07-15 11:36:02.801433] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b5c0, cid 5, qid 0 00:24:34.325 [2024-07-15 11:36:02.801608] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.325 [2024-07-15 11:36:02.801614] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.325 [2024-07-15 11:36:02.801618] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.325 [2024-07-15 11:36:02.801621] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b5c0) on tqpair=0x16f7ec0 00:24:34.325 [2024-07-15 11:36:02.801630] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.325 [2024-07-15 11:36:02.801634] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16f7ec0) 00:24:34.325 [2024-07-15 11:36:02.801640] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.325 [2024-07-15 11:36:02.801650] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b5c0, cid 5, qid 0 00:24:34.325 [2024-07-15 11:36:02.801876] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.325 [2024-07-15 11:36:02.801883] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.325 [2024-07-15 11:36:02.801886] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.325 [2024-07-15 11:36:02.801890] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b5c0) on tqpair=0x16f7ec0 00:24:34.325 [2024-07-15 11:36:02.801898] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.325 [2024-07-15 11:36:02.801902] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16f7ec0) 00:24:34.325 [2024-07-15 11:36:02.801908] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.325 [2024-07-15 11:36:02.801918] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b5c0, cid 5, qid 0 00:24:34.325 [2024-07-15 11:36:02.802120] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.325 [2024-07-15 11:36:02.802130] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.325 [2024-07-15 11:36:02.802134] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.325 [2024-07-15 11:36:02.802137] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b5c0) on tqpair=0x16f7ec0 00:24:34.325 [2024-07-15 11:36:02.802152] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.325 [2024-07-15 11:36:02.802156] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16f7ec0) 00:24:34.325 [2024-07-15 11:36:02.802162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.325 [2024-07-15 11:36:02.802169] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.325 [2024-07-15 11:36:02.802173] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16f7ec0) 00:24:34.325 [2024-07-15 11:36:02.802179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.325 [2024-07-15 11:36:02.802186] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.325 [2024-07-15 11:36:02.802190] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x16f7ec0) 00:24:34.325 [2024-07-15 11:36:02.802196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.325 [2024-07-15 11:36:02.802204] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.325 [2024-07-15 11:36:02.802207] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x16f7ec0) 00:24:34.325 [2024-07-15 11:36:02.802213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.325 [2024-07-15 11:36:02.802225] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b5c0, cid 5, qid 0 00:24:34.325 [2024-07-15 11:36:02.802231] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b440, cid 4, qid 0 
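The debug records above trace the admin-queue commands the SPDK NVMe/TCP initiator issues while bringing up nqn.2016-06.io.spdk:cnode1: Set/Get Features, the four Async Event Requests, Keep Alive configuration, Identify, and the Get Log Page reads whose cdw10 values correspond to log IDs 01h (Error Information), 02h (SMART / Health Information), 03h (Firmware Slot Information) and 05h (Commands Supported and Effects). For reference only, the same data can be read back from an already-connected controller with nvme-cli; the sketch below is an illustration and is not part of the test scripts, and the /dev/nvme0 device name is an assumption, not taken from this run.
# Hypothetical follow-up queries against a connected controller (device name assumed):
nvme id-ctrl /dev/nvme0       # Identify Controller (opcode 06h)
nvme error-log /dev/nvme0     # Get Log Page, LID 01h (Error Information)
nvme smart-log /dev/nvme0     # Get Log Page, LID 02h (SMART / Health Information)
nvme fw-log /dev/nvme0        # Get Log Page, LID 03h (Firmware Slot Information)
nvme effects-log /dev/nvme0   # Get Log Page, LID 05h (Commands Supported and Effects)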
00:24:34.325 [2024-07-15 11:36:02.802236] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b740, cid 6, qid 0 00:24:34.325 [2024-07-15 11:36:02.802241] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b8c0, cid 7, qid 0 00:24:34.325 [2024-07-15 11:36:02.802497] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:34.325 [2024-07-15 11:36:02.802504] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:34.325 [2024-07-15 11:36:02.802507] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:34.325 [2024-07-15 11:36:02.802511] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16f7ec0): datao=0, datal=8192, cccid=5 00:24:34.326 [2024-07-15 11:36:02.802515] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x177b5c0) on tqpair(0x16f7ec0): expected_datao=0, payload_size=8192 00:24:34.326 [2024-07-15 11:36:02.802519] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.326 [2024-07-15 11:36:02.802648] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:34.326 [2024-07-15 11:36:02.802652] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:34.326 [2024-07-15 11:36:02.802658] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:34.326 [2024-07-15 11:36:02.802663] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:34.326 [2024-07-15 11:36:02.802667] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:34.326 [2024-07-15 11:36:02.802670] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16f7ec0): datao=0, datal=512, cccid=4 00:24:34.326 [2024-07-15 11:36:02.802675] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x177b440) on tqpair(0x16f7ec0): expected_datao=0, payload_size=512 00:24:34.326 [2024-07-15 11:36:02.802679] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.326 [2024-07-15 11:36:02.802685] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:34.326 [2024-07-15 11:36:02.802688] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:34.326 [2024-07-15 11:36:02.802694] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:34.326 [2024-07-15 11:36:02.802700] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:34.326 [2024-07-15 11:36:02.802703] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:34.326 [2024-07-15 11:36:02.802706] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16f7ec0): datao=0, datal=512, cccid=6 00:24:34.326 [2024-07-15 11:36:02.802711] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x177b740) on tqpair(0x16f7ec0): expected_datao=0, payload_size=512 00:24:34.326 [2024-07-15 11:36:02.802715] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.326 [2024-07-15 11:36:02.802721] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:34.326 [2024-07-15 11:36:02.802724] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:34.326 [2024-07-15 11:36:02.802730] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:34.326 [2024-07-15 11:36:02.802736] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:34.326 [2024-07-15 11:36:02.802739] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:34.326 [2024-07-15 11:36:02.802742] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16f7ec0): datao=0, datal=4096, cccid=7 00:24:34.326 [2024-07-15 11:36:02.802746] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x177b8c0) on tqpair(0x16f7ec0): expected_datao=0, payload_size=4096 00:24:34.326 [2024-07-15 11:36:02.802751] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.326 [2024-07-15 11:36:02.802757] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:34.326 [2024-07-15 11:36:02.802761] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:34.326 [2024-07-15 11:36:02.848131] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.326 [2024-07-15 11:36:02.848141] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.326 [2024-07-15 11:36:02.848145] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.326 [2024-07-15 11:36:02.848152] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b5c0) on tqpair=0x16f7ec0 00:24:34.326 [2024-07-15 11:36:02.848165] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.326 [2024-07-15 11:36:02.848171] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.326 [2024-07-15 11:36:02.848174] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.326 [2024-07-15 11:36:02.848178] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b440) on tqpair=0x16f7ec0 00:24:34.326 [2024-07-15 11:36:02.848187] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.326 [2024-07-15 11:36:02.848193] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.326 [2024-07-15 11:36:02.848196] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.326 [2024-07-15 11:36:02.848200] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b740) on tqpair=0x16f7ec0 00:24:34.326 [2024-07-15 11:36:02.848207] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.326 [2024-07-15 11:36:02.848213] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.326 [2024-07-15 11:36:02.848216] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.326 [2024-07-15 11:36:02.848220] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b8c0) on tqpair=0x16f7ec0 00:24:34.326 ===================================================== 00:24:34.326 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:34.326 ===================================================== 00:24:34.326 Controller Capabilities/Features 00:24:34.326 ================================ 00:24:34.326 Vendor ID: 8086 00:24:34.326 Subsystem Vendor ID: 8086 00:24:34.326 Serial Number: SPDK00000000000001 00:24:34.326 Model Number: SPDK bdev Controller 00:24:34.326 Firmware Version: 24.09 00:24:34.326 Recommended Arb Burst: 6 00:24:34.326 IEEE OUI Identifier: e4 d2 5c 00:24:34.326 Multi-path I/O 00:24:34.326 May have multiple subsystem ports: Yes 00:24:34.326 May have multiple controllers: Yes 00:24:34.326 Associated with SR-IOV VF: No 00:24:34.326 Max Data Transfer Size: 131072 00:24:34.326 Max Number of Namespaces: 32 00:24:34.326 Max Number of I/O Queues: 127 00:24:34.326 NVMe Specification Version (VS): 1.3 00:24:34.326 NVMe Specification Version (Identify): 1.3 00:24:34.326 Maximum Queue Entries: 128 00:24:34.326 Contiguous Queues Required: Yes 00:24:34.326 
Arbitration Mechanisms Supported 00:24:34.326 Weighted Round Robin: Not Supported 00:24:34.326 Vendor Specific: Not Supported 00:24:34.326 Reset Timeout: 15000 ms 00:24:34.326 Doorbell Stride: 4 bytes 00:24:34.326 NVM Subsystem Reset: Not Supported 00:24:34.326 Command Sets Supported 00:24:34.326 NVM Command Set: Supported 00:24:34.326 Boot Partition: Not Supported 00:24:34.326 Memory Page Size Minimum: 4096 bytes 00:24:34.326 Memory Page Size Maximum: 4096 bytes 00:24:34.326 Persistent Memory Region: Not Supported 00:24:34.326 Optional Asynchronous Events Supported 00:24:34.326 Namespace Attribute Notices: Supported 00:24:34.326 Firmware Activation Notices: Not Supported 00:24:34.326 ANA Change Notices: Not Supported 00:24:34.326 PLE Aggregate Log Change Notices: Not Supported 00:24:34.326 LBA Status Info Alert Notices: Not Supported 00:24:34.326 EGE Aggregate Log Change Notices: Not Supported 00:24:34.326 Normal NVM Subsystem Shutdown event: Not Supported 00:24:34.326 Zone Descriptor Change Notices: Not Supported 00:24:34.326 Discovery Log Change Notices: Not Supported 00:24:34.326 Controller Attributes 00:24:34.326 128-bit Host Identifier: Supported 00:24:34.326 Non-Operational Permissive Mode: Not Supported 00:24:34.326 NVM Sets: Not Supported 00:24:34.326 Read Recovery Levels: Not Supported 00:24:34.326 Endurance Groups: Not Supported 00:24:34.326 Predictable Latency Mode: Not Supported 00:24:34.326 Traffic Based Keep ALive: Not Supported 00:24:34.326 Namespace Granularity: Not Supported 00:24:34.326 SQ Associations: Not Supported 00:24:34.326 UUID List: Not Supported 00:24:34.326 Multi-Domain Subsystem: Not Supported 00:24:34.326 Fixed Capacity Management: Not Supported 00:24:34.326 Variable Capacity Management: Not Supported 00:24:34.326 Delete Endurance Group: Not Supported 00:24:34.326 Delete NVM Set: Not Supported 00:24:34.326 Extended LBA Formats Supported: Not Supported 00:24:34.326 Flexible Data Placement Supported: Not Supported 00:24:34.326 00:24:34.326 Controller Memory Buffer Support 00:24:34.326 ================================ 00:24:34.326 Supported: No 00:24:34.326 00:24:34.326 Persistent Memory Region Support 00:24:34.326 ================================ 00:24:34.326 Supported: No 00:24:34.326 00:24:34.326 Admin Command Set Attributes 00:24:34.326 ============================ 00:24:34.326 Security Send/Receive: Not Supported 00:24:34.326 Format NVM: Not Supported 00:24:34.326 Firmware Activate/Download: Not Supported 00:24:34.326 Namespace Management: Not Supported 00:24:34.326 Device Self-Test: Not Supported 00:24:34.326 Directives: Not Supported 00:24:34.326 NVMe-MI: Not Supported 00:24:34.326 Virtualization Management: Not Supported 00:24:34.326 Doorbell Buffer Config: Not Supported 00:24:34.326 Get LBA Status Capability: Not Supported 00:24:34.326 Command & Feature Lockdown Capability: Not Supported 00:24:34.326 Abort Command Limit: 4 00:24:34.326 Async Event Request Limit: 4 00:24:34.326 Number of Firmware Slots: N/A 00:24:34.326 Firmware Slot 1 Read-Only: N/A 00:24:34.326 Firmware Activation Without Reset: N/A 00:24:34.326 Multiple Update Detection Support: N/A 00:24:34.326 Firmware Update Granularity: No Information Provided 00:24:34.326 Per-Namespace SMART Log: No 00:24:34.326 Asymmetric Namespace Access Log Page: Not Supported 00:24:34.326 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:34.326 Command Effects Log Page: Supported 00:24:34.326 Get Log Page Extended Data: Supported 00:24:34.326 Telemetry Log Pages: Not Supported 00:24:34.326 Persistent Event Log 
Pages: Not Supported 00:24:34.326 Supported Log Pages Log Page: May Support 00:24:34.326 Commands Supported & Effects Log Page: Not Supported 00:24:34.326 Feature Identifiers & Effects Log Page:May Support 00:24:34.326 NVMe-MI Commands & Effects Log Page: May Support 00:24:34.326 Data Area 4 for Telemetry Log: Not Supported 00:24:34.326 Error Log Page Entries Supported: 128 00:24:34.326 Keep Alive: Supported 00:24:34.326 Keep Alive Granularity: 10000 ms 00:24:34.326 00:24:34.326 NVM Command Set Attributes 00:24:34.326 ========================== 00:24:34.326 Submission Queue Entry Size 00:24:34.326 Max: 64 00:24:34.326 Min: 64 00:24:34.326 Completion Queue Entry Size 00:24:34.326 Max: 16 00:24:34.326 Min: 16 00:24:34.326 Number of Namespaces: 32 00:24:34.326 Compare Command: Supported 00:24:34.326 Write Uncorrectable Command: Not Supported 00:24:34.326 Dataset Management Command: Supported 00:24:34.326 Write Zeroes Command: Supported 00:24:34.326 Set Features Save Field: Not Supported 00:24:34.326 Reservations: Supported 00:24:34.326 Timestamp: Not Supported 00:24:34.326 Copy: Supported 00:24:34.326 Volatile Write Cache: Present 00:24:34.326 Atomic Write Unit (Normal): 1 00:24:34.326 Atomic Write Unit (PFail): 1 00:24:34.326 Atomic Compare & Write Unit: 1 00:24:34.327 Fused Compare & Write: Supported 00:24:34.327 Scatter-Gather List 00:24:34.327 SGL Command Set: Supported 00:24:34.327 SGL Keyed: Supported 00:24:34.327 SGL Bit Bucket Descriptor: Not Supported 00:24:34.327 SGL Metadata Pointer: Not Supported 00:24:34.327 Oversized SGL: Not Supported 00:24:34.327 SGL Metadata Address: Not Supported 00:24:34.327 SGL Offset: Supported 00:24:34.327 Transport SGL Data Block: Not Supported 00:24:34.327 Replay Protected Memory Block: Not Supported 00:24:34.327 00:24:34.327 Firmware Slot Information 00:24:34.327 ========================= 00:24:34.327 Active slot: 1 00:24:34.327 Slot 1 Firmware Revision: 24.09 00:24:34.327 00:24:34.327 00:24:34.327 Commands Supported and Effects 00:24:34.327 ============================== 00:24:34.327 Admin Commands 00:24:34.327 -------------- 00:24:34.327 Get Log Page (02h): Supported 00:24:34.327 Identify (06h): Supported 00:24:34.327 Abort (08h): Supported 00:24:34.327 Set Features (09h): Supported 00:24:34.327 Get Features (0Ah): Supported 00:24:34.327 Asynchronous Event Request (0Ch): Supported 00:24:34.327 Keep Alive (18h): Supported 00:24:34.327 I/O Commands 00:24:34.327 ------------ 00:24:34.327 Flush (00h): Supported LBA-Change 00:24:34.327 Write (01h): Supported LBA-Change 00:24:34.327 Read (02h): Supported 00:24:34.327 Compare (05h): Supported 00:24:34.327 Write Zeroes (08h): Supported LBA-Change 00:24:34.327 Dataset Management (09h): Supported LBA-Change 00:24:34.327 Copy (19h): Supported LBA-Change 00:24:34.327 00:24:34.327 Error Log 00:24:34.327 ========= 00:24:34.327 00:24:34.327 Arbitration 00:24:34.327 =========== 00:24:34.327 Arbitration Burst: 1 00:24:34.327 00:24:34.327 Power Management 00:24:34.327 ================ 00:24:34.327 Number of Power States: 1 00:24:34.327 Current Power State: Power State #0 00:24:34.327 Power State #0: 00:24:34.327 Max Power: 0.00 W 00:24:34.327 Non-Operational State: Operational 00:24:34.327 Entry Latency: Not Reported 00:24:34.327 Exit Latency: Not Reported 00:24:34.327 Relative Read Throughput: 0 00:24:34.327 Relative Read Latency: 0 00:24:34.327 Relative Write Throughput: 0 00:24:34.327 Relative Write Latency: 0 00:24:34.327 Idle Power: Not Reported 00:24:34.327 Active Power: Not Reported 00:24:34.327 
Non-Operational Permissive Mode: Not Supported 00:24:34.327 00:24:34.327 Health Information 00:24:34.327 ================== 00:24:34.327 Critical Warnings: 00:24:34.327 Available Spare Space: OK 00:24:34.327 Temperature: OK 00:24:34.327 Device Reliability: OK 00:24:34.327 Read Only: No 00:24:34.327 Volatile Memory Backup: OK 00:24:34.327 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:34.327 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:34.327 Available Spare: 0% 00:24:34.327 Available Spare Threshold: 0% 00:24:34.327 Life Percentage Used:[2024-07-15 11:36:02.848317] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.327 [2024-07-15 11:36:02.848322] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x16f7ec0) 00:24:34.327 [2024-07-15 11:36:02.848330] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.327 [2024-07-15 11:36:02.848342] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b8c0, cid 7, qid 0 00:24:34.327 [2024-07-15 11:36:02.848575] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.327 [2024-07-15 11:36:02.848581] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.327 [2024-07-15 11:36:02.848584] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.327 [2024-07-15 11:36:02.848588] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b8c0) on tqpair=0x16f7ec0 00:24:34.327 [2024-07-15 11:36:02.848618] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:34.327 [2024-07-15 11:36:02.848627] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177ae40) on tqpair=0x16f7ec0 00:24:34.327 [2024-07-15 11:36:02.848634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.327 [2024-07-15 11:36:02.848639] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177afc0) on tqpair=0x16f7ec0 00:24:34.327 [2024-07-15 11:36:02.848643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.327 [2024-07-15 11:36:02.848648] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b140) on tqpair=0x16f7ec0 00:24:34.327 [2024-07-15 11:36:02.848653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.327 [2024-07-15 11:36:02.848658] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b2c0) on tqpair=0x16f7ec0 00:24:34.327 [2024-07-15 11:36:02.848662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.327 [2024-07-15 11:36:02.848670] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.327 [2024-07-15 11:36:02.848674] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.327 [2024-07-15 11:36:02.848677] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16f7ec0) 00:24:34.327 [2024-07-15 11:36:02.848684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.327 [2024-07-15 11:36:02.848696] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b2c0, cid 3, qid 0 00:24:34.327 [2024-07-15 11:36:02.848898] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.327 [2024-07-15 11:36:02.848904] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.327 [2024-07-15 11:36:02.848908] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.327 [2024-07-15 11:36:02.848912] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b2c0) on tqpair=0x16f7ec0 00:24:34.327 [2024-07-15 11:36:02.848918] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.327 [2024-07-15 11:36:02.848922] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.327 [2024-07-15 11:36:02.848925] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16f7ec0) 00:24:34.327 [2024-07-15 11:36:02.848932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.327 [2024-07-15 11:36:02.848945] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b2c0, cid 3, qid 0 00:24:34.327 [2024-07-15 11:36:02.849127] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.327 [2024-07-15 11:36:02.849134] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.327 [2024-07-15 11:36:02.849137] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.327 [2024-07-15 11:36:02.849141] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b2c0) on tqpair=0x16f7ec0 00:24:34.327 [2024-07-15 11:36:02.849145] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:34.327 [2024-07-15 11:36:02.849150] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:34.327 [2024-07-15 11:36:02.849158] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.327 [2024-07-15 11:36:02.849162] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.327 [2024-07-15 11:36:02.849166] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16f7ec0) 00:24:34.327 [2024-07-15 11:36:02.849172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.327 [2024-07-15 11:36:02.849183] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b2c0, cid 3, qid 0 00:24:34.327 [2024-07-15 11:36:02.849399] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.327 [2024-07-15 11:36:02.849405] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.327 [2024-07-15 11:36:02.849409] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.327 [2024-07-15 11:36:02.849412] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b2c0) on tqpair=0x16f7ec0 00:24:34.327 [2024-07-15 11:36:02.849422] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.327 [2024-07-15 11:36:02.849426] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.327 [2024-07-15 11:36:02.849429] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16f7ec0) 00:24:34.327 [2024-07-15 11:36:02.849436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.327 [2024-07-15 11:36:02.849445] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b2c0, cid 3, qid 0 00:24:34.327 [2024-07-15 11:36:02.849714] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.327 [2024-07-15 11:36:02.849720] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.327 [2024-07-15 11:36:02.849724] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.327 [2024-07-15 11:36:02.849727] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b2c0) on tqpair=0x16f7ec0 00:24:34.327 [2024-07-15 11:36:02.849737] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.327 [2024-07-15 11:36:02.849741] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.327 [2024-07-15 11:36:02.849744] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16f7ec0) 00:24:34.327 [2024-07-15 11:36:02.849751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.327 [2024-07-15 11:36:02.849763] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b2c0, cid 3, qid 0 00:24:34.327 [2024-07-15 11:36:02.849954] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.327 [2024-07-15 11:36:02.849961] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.327 [2024-07-15 11:36:02.849964] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.327 [2024-07-15 11:36:02.849968] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b2c0) on tqpair=0x16f7ec0 00:24:34.327 [2024-07-15 11:36:02.849977] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.327 [2024-07-15 11:36:02.849981] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.327 [2024-07-15 11:36:02.849984] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16f7ec0) 00:24:34.327 [2024-07-15 11:36:02.849991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.327 [2024-07-15 11:36:02.850000] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b2c0, cid 3, qid 0 00:24:34.327 [2024-07-15 11:36:02.850269] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.327 [2024-07-15 11:36:02.850275] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.327 [2024-07-15 11:36:02.850279] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.328 [2024-07-15 11:36:02.850282] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b2c0) on tqpair=0x16f7ec0 00:24:34.328 [2024-07-15 11:36:02.850292] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.328 [2024-07-15 11:36:02.850296] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.328 [2024-07-15 11:36:02.850299] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16f7ec0) 00:24:34.328 [2024-07-15 11:36:02.850306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.328 [2024-07-15 11:36:02.850315] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b2c0, cid 3, qid 0 00:24:34.328 [2024-07-15 
11:36:02.850495] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.328 [2024-07-15 11:36:02.850501] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.328 [2024-07-15 11:36:02.850504] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.328 [2024-07-15 11:36:02.850508] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b2c0) on tqpair=0x16f7ec0 00:24:34.328 [2024-07-15 11:36:02.850517] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.328 [2024-07-15 11:36:02.850521] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.328 [2024-07-15 11:36:02.850524] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16f7ec0) 00:24:34.328 [2024-07-15 11:36:02.850531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.328 [2024-07-15 11:36:02.850540] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b2c0, cid 3, qid 0 00:24:34.328 [2024-07-15 11:36:02.850753] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.328 [2024-07-15 11:36:02.850759] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.328 [2024-07-15 11:36:02.850762] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.328 [2024-07-15 11:36:02.850766] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b2c0) on tqpair=0x16f7ec0 00:24:34.328 [2024-07-15 11:36:02.850775] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.328 [2024-07-15 11:36:02.850779] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.328 [2024-07-15 11:36:02.850782] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16f7ec0) 00:24:34.328 [2024-07-15 11:36:02.850789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.328 [2024-07-15 11:36:02.850799] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b2c0, cid 3, qid 0 00:24:34.328 [2024-07-15 11:36:02.851059] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.328 [2024-07-15 11:36:02.851065] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.328 [2024-07-15 11:36:02.851068] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.328 [2024-07-15 11:36:02.851072] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b2c0) on tqpair=0x16f7ec0 00:24:34.328 [2024-07-15 11:36:02.851081] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.328 [2024-07-15 11:36:02.851085] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.328 [2024-07-15 11:36:02.851089] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16f7ec0) 00:24:34.328 [2024-07-15 11:36:02.851095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.328 [2024-07-15 11:36:02.851105] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b2c0, cid 3, qid 0 00:24:34.328 [2024-07-15 11:36:02.851299] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.328 [2024-07-15 11:36:02.851306] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.328 
[2024-07-15 11:36:02.851309] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.328 [2024-07-15 11:36:02.851313] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b2c0) on tqpair=0x16f7ec0 00:24:34.328 [2024-07-15 11:36:02.851322] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.328 [2024-07-15 11:36:02.851326] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.328 [2024-07-15 11:36:02.851329] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16f7ec0) 00:24:34.328 [2024-07-15 11:36:02.851336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.328 [2024-07-15 11:36:02.851346] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b2c0, cid 3, qid 0 00:24:34.328 [2024-07-15 11:36:02.851548] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.328 [2024-07-15 11:36:02.851554] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.328 [2024-07-15 11:36:02.851558] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.328 [2024-07-15 11:36:02.851562] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b2c0) on tqpair=0x16f7ec0 00:24:34.328 [2024-07-15 11:36:02.851571] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.328 [2024-07-15 11:36:02.851575] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.328 [2024-07-15 11:36:02.851578] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16f7ec0) 00:24:34.328 [2024-07-15 11:36:02.851585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.328 [2024-07-15 11:36:02.851594] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b2c0, cid 3, qid 0 00:24:34.328 [2024-07-15 11:36:02.851769] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.328 [2024-07-15 11:36:02.851775] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.328 [2024-07-15 11:36:02.851779] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.328 [2024-07-15 11:36:02.851782] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b2c0) on tqpair=0x16f7ec0 00:24:34.328 [2024-07-15 11:36:02.851791] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.328 [2024-07-15 11:36:02.851795] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.328 [2024-07-15 11:36:02.851799] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16f7ec0) 00:24:34.328 [2024-07-15 11:36:02.851805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.328 [2024-07-15 11:36:02.851815] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b2c0, cid 3, qid 0 00:24:34.328 [2024-07-15 11:36:02.851987] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.328 [2024-07-15 11:36:02.851993] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.328 [2024-07-15 11:36:02.851996] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.328 [2024-07-15 11:36:02.852000] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x177b2c0) on tqpair=0x16f7ec0 00:24:34.328 [2024-07-15 11:36:02.852009] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:34.328 [2024-07-15 11:36:02.852013] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:34.328 [2024-07-15 11:36:02.852017] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16f7ec0) 00:24:34.328 [2024-07-15 11:36:02.852023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.328 [2024-07-15 11:36:02.852033] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x177b2c0, cid 3, qid 0 00:24:34.328 [2024-07-15 11:36:02.856129] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:34.328 [2024-07-15 11:36:02.856137] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:34.328 [2024-07-15 11:36:02.856141] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:34.328 [2024-07-15 11:36:02.856144] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x177b2c0) on tqpair=0x16f7ec0 00:24:34.328 [2024-07-15 11:36:02.856152] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:24:34.328 0% 00:24:34.328 Data Units Read: 0 00:24:34.328 Data Units Written: 0 00:24:34.328 Host Read Commands: 0 00:24:34.328 Host Write Commands: 0 00:24:34.328 Controller Busy Time: 0 minutes 00:24:34.328 Power Cycles: 0 00:24:34.328 Power On Hours: 0 hours 00:24:34.328 Unsafe Shutdowns: 0 00:24:34.328 Unrecoverable Media Errors: 0 00:24:34.328 Lifetime Error Log Entries: 0 00:24:34.328 Warning Temperature Time: 0 minutes 00:24:34.328 Critical Temperature Time: 0 minutes 00:24:34.328 00:24:34.328 Number of Queues 00:24:34.328 ================ 00:24:34.328 Number of I/O Submission Queues: 127 00:24:34.328 Number of I/O Completion Queues: 127 00:24:34.328 00:24:34.328 Active Namespaces 00:24:34.328 ================= 00:24:34.328 Namespace ID:1 00:24:34.328 Error Recovery Timeout: Unlimited 00:24:34.328 Command Set Identifier: NVM (00h) 00:24:34.328 Deallocate: Supported 00:24:34.328 Deallocated/Unwritten Error: Not Supported 00:24:34.328 Deallocated Read Value: Unknown 00:24:34.328 Deallocate in Write Zeroes: Not Supported 00:24:34.328 Deallocated Guard Field: 0xFFFF 00:24:34.328 Flush: Supported 00:24:34.328 Reservation: Supported 00:24:34.328 Namespace Sharing Capabilities: Multiple Controllers 00:24:34.328 Size (in LBAs): 131072 (0GiB) 00:24:34.328 Capacity (in LBAs): 131072 (0GiB) 00:24:34.328 Utilization (in LBAs): 131072 (0GiB) 00:24:34.328 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:34.328 EUI64: ABCDEF0123456789 00:24:34.328 UUID: 8c08592c-7114-43ed-847a-cf6d494868ab 00:24:34.328 Thin Provisioning: Not Supported 00:24:34.328 Per-NS Atomic Units: Yes 00:24:34.328 Atomic Boundary Size (Normal): 0 00:24:34.328 Atomic Boundary Size (PFail): 0 00:24:34.328 Atomic Boundary Offset: 0 00:24:34.328 Maximum Single Source Range Length: 65535 00:24:34.328 Maximum Copy Length: 65535 00:24:34.328 Maximum Source Range Count: 1 00:24:34.328 NGUID/EUI64 Never Reused: No 00:24:34.328 Namespace Write Protected: No 00:24:34.328 Number of LBA Formats: 1 00:24:34.328 Current LBA Format: LBA Format #00 00:24:34.328 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:34.328 00:24:34.328 11:36:02 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:34.328 11:36:02 nvmf_tcp.nvmf_identify -- 
host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:34.328 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.328 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:34.328 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.328 11:36:02 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:34.328 11:36:02 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:34.328 11:36:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:34.328 11:36:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:24:34.328 11:36:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:34.328 11:36:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:24:34.328 11:36:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:34.328 11:36:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:34.328 rmmod nvme_tcp 00:24:34.328 rmmod nvme_fabrics 00:24:34.328 rmmod nvme_keyring 00:24:34.328 11:36:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:34.329 11:36:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:24:34.329 11:36:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:24:34.329 11:36:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3644971 ']' 00:24:34.329 11:36:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3644971 00:24:34.329 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 3644971 ']' 00:24:34.329 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 3644971 00:24:34.329 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:24:34.329 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:34.329 11:36:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3644971 00:24:34.589 11:36:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:34.589 11:36:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:34.589 11:36:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3644971' 00:24:34.589 killing process with pid 3644971 00:24:34.589 11:36:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 3644971 00:24:34.589 11:36:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 3644971 00:24:34.589 11:36:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:34.589 11:36:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:34.589 11:36:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:34.589 11:36:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:34.589 11:36:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:34.589 11:36:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.589 11:36:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:34.589 11:36:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.133 11:36:05 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:37.133 00:24:37.133 real 0m11.050s 00:24:37.133 user 0m8.518s 00:24:37.133 sys 0m5.577s 00:24:37.133 11:36:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:37.133 11:36:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:37.133 ************************************ 00:24:37.133 END TEST nvmf_identify 00:24:37.133 ************************************ 00:24:37.133 11:36:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:37.133 11:36:05 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:37.133 11:36:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:37.133 11:36:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:37.133 11:36:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:37.133 ************************************ 00:24:37.133 START TEST nvmf_perf 00:24:37.133 ************************************ 00:24:37.133 11:36:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:37.133 * Looking for test storage... 00:24:37.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:37.133 11:36:05 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:37.133 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:37.133 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:37.133 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:37.133 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:37.133 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:37.133 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:37.133 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:37.133 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:37.133 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:37.133 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:37.133 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:37.133 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:37.133 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:37.133 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:37.133 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:37.133 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:37.133 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:37.133 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:37.133 11:36:05 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:37.133 11:36:05 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:24:37.133 11:36:05 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:37.133 11:36:05 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.133 11:36:05 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.134 11:36:05 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.134 11:36:05 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:37.134 11:36:05 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.134 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:37.134 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:37.134 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:37.134 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:37.134 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:37.134 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:37.134 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:37.134 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:37.134 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:37.134 11:36:05 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:37.134 11:36:05 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:37.134 11:36:05 
nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:37.134 11:36:05 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:37.134 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:37.134 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:37.134 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:37.134 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:37.134 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:37.134 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.134 11:36:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:37.134 11:36:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.134 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:37.134 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:37.134 11:36:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:37.134 11:36:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:43.717 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:43.717 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:43.717 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:43.717 11:36:12 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:43.717 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:43.717 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:43.718 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:43.718 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:43.718 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:43.718 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:43.718 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:43.718 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:43.718 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:43.718 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:43.718 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:43.718 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:43.718 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:43.718 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:43.718 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:43.718 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:43.718 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:43.978 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:43.978 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:43.978 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:43.978 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:43.978 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:43.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:43.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.507 ms 00:24:43.978 00:24:43.978 --- 10.0.0.2 ping statistics --- 00:24:43.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.979 rtt min/avg/max/mdev = 0.507/0.507/0.507/0.000 ms 00:24:43.979 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:43.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:43.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.358 ms 00:24:43.979 00:24:43.979 --- 10.0.0.1 ping statistics --- 00:24:43.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.979 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:24:43.979 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:43.979 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:43.979 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:43.979 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:43.979 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:43.979 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:43.979 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:43.979 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:43.979 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:43.979 11:36:12 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:43.979 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:43.979 11:36:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:43.979 11:36:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:43.979 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3650056 00:24:43.979 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3650056 00:24:43.979 11:36:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 3650056 ']' 00:24:43.979 11:36:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.979 11:36:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:43.979 11:36:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:43.979 11:36:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:43.979 11:36:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:43.979 11:36:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:43.979 [2024-07-15 11:36:12.639686] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:24:43.979 [2024-07-15 11:36:12.639737] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.979 EAL: No free 2048 kB hugepages reported on node 1 00:24:44.240 [2024-07-15 11:36:12.705290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:44.240 [2024-07-15 11:36:12.770964] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:44.240 [2024-07-15 11:36:12.770996] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:44.240 [2024-07-15 11:36:12.771003] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:44.240 [2024-07-15 11:36:12.771010] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:44.240 [2024-07-15 11:36:12.771015] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:44.240 [2024-07-15 11:36:12.771170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.240 [2024-07-15 11:36:12.771275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:44.240 [2024-07-15 11:36:12.771323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.240 [2024-07-15 11:36:12.771324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:44.810 11:36:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:44.810 11:36:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:24:44.810 11:36:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:44.810 11:36:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:44.810 11:36:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:44.810 11:36:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:44.810 11:36:13 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:44.810 11:36:13 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:45.382 11:36:13 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:45.382 11:36:13 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:45.643 11:36:14 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:45.643 11:36:14 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:45.643 11:36:14 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:45.643 11:36:14 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:45.643 11:36:14 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:45.643 11:36:14 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:45.643 11:36:14 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:45.904 [2024-07-15 11:36:14.425438] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:24:45.904 11:36:14 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:46.164 11:36:14 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:46.164 11:36:14 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:46.164 11:36:14 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:46.164 11:36:14 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:46.425 11:36:14 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:46.425 [2024-07-15 11:36:15.091888] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.425 11:36:15 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:46.686 11:36:15 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:46.686 11:36:15 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:46.686 11:36:15 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:46.686 11:36:15 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:48.071 Initializing NVMe Controllers 00:24:48.071 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:48.071 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:48.071 Initialization complete. Launching workers. 00:24:48.071 ======================================================== 00:24:48.071 Latency(us) 00:24:48.071 Device Information : IOPS MiB/s Average min max 00:24:48.071 PCIE (0000:65:00.0) NSID 1 from core 0: 80084.56 312.83 399.03 13.36 5212.75 00:24:48.071 ======================================================== 00:24:48.071 Total : 80084.56 312.83 399.03 13.36 5212.75 00:24:48.071 00:24:48.071 11:36:16 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:48.071 EAL: No free 2048 kB hugepages reported on node 1 00:24:49.455 Initializing NVMe Controllers 00:24:49.455 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:49.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:49.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:49.455 Initialization complete. Launching workers. 
00:24:49.455 ======================================================== 00:24:49.455 Latency(us) 00:24:49.455 Device Information : IOPS MiB/s Average min max 00:24:49.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.00 0.31 12973.85 432.38 44525.21 00:24:49.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.00 0.26 15236.22 7964.36 47892.18 00:24:49.455 ======================================================== 00:24:49.455 Total : 145.00 0.57 14003.62 432.38 47892.18 00:24:49.455 00:24:49.455 11:36:17 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:49.455 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.902 Initializing NVMe Controllers 00:24:50.902 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:50.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:50.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:50.902 Initialization complete. Launching workers. 00:24:50.902 ======================================================== 00:24:50.902 Latency(us) 00:24:50.902 Device Information : IOPS MiB/s Average min max 00:24:50.902 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10110.98 39.50 3165.33 499.40 8094.39 00:24:50.902 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3966.99 15.50 8110.13 6935.91 16015.90 00:24:50.902 ======================================================== 00:24:50.902 Total : 14077.97 54.99 4558.71 499.40 16015.90 00:24:50.902 00:24:50.902 11:36:19 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:50.902 11:36:19 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:50.902 11:36:19 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:50.902 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.445 Initializing NVMe Controllers 00:24:53.445 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:53.445 Controller IO queue size 128, less than required. 00:24:53.445 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:53.445 Controller IO queue size 128, less than required. 00:24:53.445 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:53.445 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:53.445 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:53.445 Initialization complete. Launching workers. 
00:24:53.445 ======================================================== 00:24:53.445 Latency(us) 00:24:53.445 Device Information : IOPS MiB/s Average min max 00:24:53.445 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 943.03 235.76 140409.47 97336.16 207476.01 00:24:53.445 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 570.31 142.58 230770.57 77926.28 335370.22 00:24:53.445 ======================================================== 00:24:53.445 Total : 1513.34 378.34 174462.47 77926.28 335370.22 00:24:53.445 00:24:53.445 11:36:21 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:53.445 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.445 No valid NVMe controllers or AIO or URING devices found 00:24:53.445 Initializing NVMe Controllers 00:24:53.445 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:53.445 Controller IO queue size 128, less than required. 00:24:53.445 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:53.445 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:53.445 Controller IO queue size 128, less than required. 00:24:53.445 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:53.445 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:53.445 WARNING: Some requested NVMe devices were skipped 00:24:53.445 11:36:21 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:53.445 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.006 Initializing NVMe Controllers 00:24:56.006 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:56.006 Controller IO queue size 128, less than required. 00:24:56.006 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:56.006 Controller IO queue size 128, less than required. 00:24:56.006 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:56.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:56.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:56.006 Initialization complete. Launching workers. 
00:24:56.006 00:24:56.006 ==================== 00:24:56.006 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:56.006 TCP transport: 00:24:56.006 polls: 41444 00:24:56.006 idle_polls: 17281 00:24:56.006 sock_completions: 24163 00:24:56.006 nvme_completions: 3961 00:24:56.006 submitted_requests: 6026 00:24:56.006 queued_requests: 1 00:24:56.006 00:24:56.006 ==================== 00:24:56.006 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:56.006 TCP transport: 00:24:56.006 polls: 39080 00:24:56.006 idle_polls: 16139 00:24:56.006 sock_completions: 22941 00:24:56.006 nvme_completions: 3725 00:24:56.006 submitted_requests: 5582 00:24:56.006 queued_requests: 1 00:24:56.006 ======================================================== 00:24:56.006 Latency(us) 00:24:56.006 Device Information : IOPS MiB/s Average min max 00:24:56.006 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 987.48 246.87 135948.95 78160.97 225823.08 00:24:56.006 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 928.63 232.16 140396.65 61479.12 229644.89 00:24:56.006 ======================================================== 00:24:56.006 Total : 1916.10 479.03 138104.50 61479.12 229644.89 00:24:56.006 00:24:56.006 11:36:24 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:56.006 11:36:24 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:56.006 11:36:24 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:56.006 11:36:24 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:56.006 11:36:24 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:56.006 11:36:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:56.006 11:36:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:56.006 11:36:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:56.006 11:36:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:56.006 11:36:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:56.006 11:36:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:56.006 rmmod nvme_tcp 00:24:56.006 rmmod nvme_fabrics 00:24:56.268 rmmod nvme_keyring 00:24:56.268 11:36:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:56.268 11:36:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:56.268 11:36:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:56.268 11:36:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3650056 ']' 00:24:56.268 11:36:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3650056 00:24:56.268 11:36:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 3650056 ']' 00:24:56.268 11:36:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 3650056 00:24:56.268 11:36:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:24:56.268 11:36:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:56.268 11:36:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3650056 00:24:56.268 11:36:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:56.268 11:36:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:56.268 11:36:24 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3650056' 00:24:56.268 killing process with pid 3650056 00:24:56.268 11:36:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 3650056 00:24:56.268 11:36:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 3650056 00:24:58.183 11:36:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:58.183 11:36:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:58.183 11:36:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:58.183 11:36:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:58.183 11:36:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:58.183 11:36:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.183 11:36:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:58.183 11:36:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.727 11:36:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:00.727 00:25:00.727 real 0m23.526s 00:25:00.727 user 0m57.538s 00:25:00.727 sys 0m7.684s 00:25:00.727 11:36:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:00.727 11:36:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:00.727 ************************************ 00:25:00.727 END TEST nvmf_perf 00:25:00.727 ************************************ 00:25:00.727 11:36:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:00.727 11:36:28 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:00.727 11:36:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:00.727 11:36:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:00.727 11:36:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:00.727 ************************************ 00:25:00.727 START TEST nvmf_fio_host 00:25:00.727 ************************************ 00:25:00.727 11:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:00.727 * Looking for test storage... 
00:25:00.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:00.727 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:00.728 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:00.728 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:00.728 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.728 11:36:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:00.728 11:36:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.728 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:00.728 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:00.728 11:36:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:00.728 11:36:29 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:07.311 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:07.311 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:07.311 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:07.311 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:07.311 11:36:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:07.311 11:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:07.572 11:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:07.572 11:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:07.572 11:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:07.572 11:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:07.572 11:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:07.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:07.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.526 ms 00:25:07.572 00:25:07.572 --- 10.0.0.2 ping statistics --- 00:25:07.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.572 rtt min/avg/max/mdev = 0.526/0.526/0.526/0.000 ms 00:25:07.572 11:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:07.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:07.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.392 ms 00:25:07.572 00:25:07.572 --- 10.0.0.1 ping statistics --- 00:25:07.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.572 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:25:07.572 11:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:07.572 11:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:25:07.573 11:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:07.573 11:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:07.573 11:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:07.573 11:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:07.573 11:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:07.573 11:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:07.573 11:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:07.573 11:36:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:07.573 11:36:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:07.573 11:36:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:07.573 11:36:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.573 11:36:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3656860 00:25:07.573 11:36:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:07.573 11:36:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:07.573 11:36:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3656860 00:25:07.573 11:36:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 3656860 ']' 00:25:07.573 11:36:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.573 11:36:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:07.573 11:36:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.573 11:36:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:07.573 11:36:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.573 [2024-07-15 11:36:36.267304] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:25:07.573 [2024-07-15 11:36:36.267387] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:07.833 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.833 [2024-07-15 11:36:36.339030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:07.833 [2024-07-15 11:36:36.413257] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:07.833 [2024-07-15 11:36:36.413293] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:07.833 [2024-07-15 11:36:36.413300] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:07.833 [2024-07-15 11:36:36.413309] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:07.833 [2024-07-15 11:36:36.413315] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:07.833 [2024-07-15 11:36:36.413452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:07.833 [2024-07-15 11:36:36.413569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:07.833 [2024-07-15 11:36:36.413725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.833 [2024-07-15 11:36:36.413727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:08.403 11:36:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:08.403 11:36:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:25:08.403 11:36:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:08.664 [2024-07-15 11:36:37.180018] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.664 11:36:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:08.664 11:36:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:08.664 11:36:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.664 11:36:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:08.924 Malloc1 00:25:08.924 11:36:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:08.924 11:36:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:09.186 11:36:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:09.447 [2024-07-15 11:36:37.909541] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:09.447 11:36:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:09.447 11:36:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:09.447 11:36:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:09.447 11:36:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:25:09.447 11:36:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:09.447 11:36:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:09.447 11:36:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:09.447 11:36:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:09.447 11:36:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:09.447 11:36:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:09.447 11:36:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:09.447 11:36:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:09.447 11:36:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:09.447 11:36:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:09.447 11:36:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:09.447 11:36:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:09.447 11:36:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:09.447 11:36:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:09.447 11:36:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:09.447 11:36:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:09.732 11:36:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:09.732 11:36:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:09.732 11:36:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:09.732 11:36:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:09.992 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:09.992 fio-3.35 00:25:09.992 Starting 1 thread 00:25:09.992 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.536 00:25:12.536 test: (groupid=0, jobs=1): err= 0: pid=3657469: Mon Jul 15 11:36:40 2024 00:25:12.536 read: IOPS=14.0k, BW=54.6MiB/s (57.3MB/s)(109MiB/2004msec) 00:25:12.536 slat (usec): min=2, max=258, avg= 2.17, stdev= 2.15 00:25:12.536 clat (usec): min=3003, max=9115, avg=5153.08, stdev=664.05 00:25:12.536 lat (usec): min=3005, max=9117, avg=5155.24, stdev=664.15 00:25:12.536 clat percentiles (usec): 00:25:12.536 | 1.00th=[ 3851], 5.00th=[ 4293], 10.00th=[ 4490], 20.00th=[ 4686], 00:25:12.536 | 30.00th=[ 4817], 40.00th=[ 4948], 50.00th=[ 5080], 60.00th=[ 5211], 00:25:12.536 | 70.00th=[ 5342], 80.00th=[ 5538], 90.00th=[ 5932], 95.00th=[ 6456], 00:25:12.536 | 99.00th=[ 7504], 99.50th=[ 7832], 99.90th=[ 8717], 99.95th=[ 8979], 00:25:12.536 | 99.99th=[ 9110] 00:25:12.536 bw ( KiB/s): min=54352, 
max=56680, per=99.95%, avg=55896.00, stdev=1046.93, samples=4 00:25:12.536 iops : min=13588, max=14170, avg=13974.00, stdev=261.73, samples=4 00:25:12.536 write: IOPS=14.0k, BW=54.7MiB/s (57.3MB/s)(110MiB/2004msec); 0 zone resets 00:25:12.536 slat (usec): min=2, max=233, avg= 2.27, stdev= 1.54 00:25:12.536 clat (usec): min=2159, max=8289, avg=3940.55, stdev=451.65 00:25:12.536 lat (usec): min=2162, max=8291, avg=3942.82, stdev=451.74 00:25:12.536 clat percentiles (usec): 00:25:12.536 | 1.00th=[ 2671], 5.00th=[ 3097], 10.00th=[ 3359], 20.00th=[ 3621], 00:25:12.536 | 30.00th=[ 3785], 40.00th=[ 3884], 50.00th=[ 3982], 60.00th=[ 4080], 00:25:12.536 | 70.00th=[ 4178], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4555], 00:25:12.536 | 99.00th=[ 4948], 99.50th=[ 5080], 99.90th=[ 6325], 99.95th=[ 6783], 00:25:12.536 | 99.99th=[ 7898] 00:25:12.536 bw ( KiB/s): min=54704, max=56504, per=99.99%, avg=55964.00, stdev=845.69, samples=4 00:25:12.536 iops : min=13676, max=14126, avg=13991.00, stdev=211.42, samples=4 00:25:12.536 lat (msec) : 4=26.72%, 10=73.28% 00:25:12.536 cpu : usr=68.55%, sys=26.16%, ctx=29, majf=0, minf=7 00:25:12.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:12.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:12.536 issued rwts: total=28017,28040,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.536 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:12.536 00:25:12.536 Run status group 0 (all jobs): 00:25:12.536 READ: bw=54.6MiB/s (57.3MB/s), 54.6MiB/s-54.6MiB/s (57.3MB/s-57.3MB/s), io=109MiB (115MB), run=2004-2004msec 00:25:12.536 WRITE: bw=54.7MiB/s (57.3MB/s), 54.7MiB/s-54.7MiB/s (57.3MB/s-57.3MB/s), io=110MiB (115MB), run=2004-2004msec 00:25:12.536 11:36:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:12.536 11:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:12.536 11:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:12.536 11:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:12.536 11:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:12.536 11:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:12.536 11:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:12.536 11:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:12.536 11:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:12.536 11:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:12.536 11:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:12.536 11:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print 
$3}' 00:25:12.536 11:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:12.536 11:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:12.536 11:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:12.536 11:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:12.536 11:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:12.536 11:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:12.536 11:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:12.536 11:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:12.537 11:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:12.537 11:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:12.537 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:12.537 fio-3.35 00:25:12.537 Starting 1 thread 00:25:12.797 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.383 00:25:15.383 test: (groupid=0, jobs=1): err= 0: pid=3658212: Mon Jul 15 11:36:43 2024 00:25:15.383 read: IOPS=8841, BW=138MiB/s (145MB/s)(277MiB/2007msec) 00:25:15.383 slat (usec): min=3, max=110, avg= 3.65, stdev= 1.63 00:25:15.383 clat (usec): min=1436, max=16917, avg=8857.58, stdev=2224.74 00:25:15.383 lat (usec): min=1440, max=16921, avg=8861.23, stdev=2224.91 00:25:15.383 clat percentiles (usec): 00:25:15.383 | 1.00th=[ 4228], 5.00th=[ 5473], 10.00th=[ 6063], 20.00th=[ 6783], 00:25:15.384 | 30.00th=[ 7439], 40.00th=[ 8160], 50.00th=[ 8848], 60.00th=[ 9372], 00:25:15.384 | 70.00th=[10028], 80.00th=[10683], 90.00th=[11994], 95.00th=[12518], 00:25:15.384 | 99.00th=[13829], 99.50th=[14353], 99.90th=[15401], 99.95th=[15664], 00:25:15.384 | 99.99th=[16057] 00:25:15.384 bw ( KiB/s): min=61376, max=81312, per=50.45%, avg=71368.00, stdev=8859.99, samples=4 00:25:15.384 iops : min= 3836, max= 5082, avg=4460.50, stdev=553.75, samples=4 00:25:15.384 write: IOPS=5291, BW=82.7MiB/s (86.7MB/s)(145MiB/1755msec); 0 zone resets 00:25:15.384 slat (usec): min=40, max=470, avg=41.11, stdev= 8.21 00:25:15.384 clat (usec): min=2777, max=16164, avg=9662.75, stdev=1594.14 00:25:15.384 lat (usec): min=2817, max=16208, avg=9703.85, stdev=1595.86 00:25:15.384 clat percentiles (usec): 00:25:15.384 | 1.00th=[ 6456], 5.00th=[ 7177], 10.00th=[ 7570], 20.00th=[ 8225], 00:25:15.384 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10028], 00:25:15.384 | 70.00th=[10421], 80.00th=[10945], 90.00th=[11731], 95.00th=[12387], 00:25:15.384 | 99.00th=[13304], 99.50th=[13960], 99.90th=[15270], 99.95th=[15664], 00:25:15.384 | 99.99th=[16188] 00:25:15.384 bw ( KiB/s): min=63744, max=85024, per=87.75%, avg=74296.00, stdev=9386.69, samples=4 00:25:15.384 iops : min= 3984, max= 5314, avg=4643.50, stdev=586.67, samples=4 00:25:15.384 lat (msec) : 2=0.01%, 4=0.53%, 10=64.94%, 20=34.53% 00:25:15.384 cpu : usr=83.25%, sys=13.21%, ctx=12, majf=0, minf=18 00:25:15.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:15.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:15.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:15.384 issued rwts: total=17744,9287,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:15.384 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:15.384 00:25:15.384 Run status group 0 (all jobs): 00:25:15.384 READ: bw=138MiB/s (145MB/s), 138MiB/s-138MiB/s (145MB/s-145MB/s), io=277MiB (291MB), run=2007-2007msec 00:25:15.384 WRITE: bw=82.7MiB/s (86.7MB/s), 82.7MiB/s-82.7MiB/s (86.7MB/s-86.7MB/s), io=145MiB (152MB), run=1755-1755msec 00:25:15.384 11:36:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:15.384 11:36:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:15.384 11:36:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:15.384 11:36:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:15.384 11:36:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:15.384 11:36:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:15.384 11:36:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:25:15.384 11:36:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:15.384 11:36:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:25:15.384 11:36:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:15.384 11:36:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:15.384 rmmod nvme_tcp 00:25:15.384 rmmod nvme_fabrics 00:25:15.384 rmmod nvme_keyring 00:25:15.384 11:36:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:15.384 11:36:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:25:15.384 11:36:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:25:15.384 11:36:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3656860 ']' 00:25:15.384 11:36:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3656860 00:25:15.384 11:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 3656860 ']' 00:25:15.384 11:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 3656860 00:25:15.384 11:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:25:15.384 11:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:15.384 11:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3656860 00:25:15.384 11:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:15.384 11:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:15.384 11:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3656860' 00:25:15.384 killing process with pid 3656860 00:25:15.384 11:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 3656860 00:25:15.384 11:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 3656860 00:25:15.384 11:36:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:15.384 11:36:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:15.384 11:36:44 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:15.384 11:36:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:15.384 11:36:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:15.384 11:36:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.384 11:36:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:15.384 11:36:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:17.930 11:36:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:17.930 00:25:17.930 real 0m17.142s 00:25:17.930 user 1m9.018s 00:25:17.930 sys 0m7.175s 00:25:17.930 11:36:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:17.930 11:36:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.930 ************************************ 00:25:17.930 END TEST nvmf_fio_host 00:25:17.930 ************************************ 00:25:17.930 11:36:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:17.930 11:36:46 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:17.930 11:36:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:17.930 11:36:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:17.930 11:36:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:17.930 ************************************ 00:25:17.930 START TEST nvmf_failover 00:25:17.930 ************************************ 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:17.930 * Looking for test storage... 
00:25:17.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:25:17.930 11:36:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:24.519 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:24.519 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:24.519 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:24.519 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:24.519 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:24.779 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:24.779 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:24.779 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:24.779 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:24.779 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:24.779 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:24.779 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:24.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:24.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:25:24.779 00:25:24.779 --- 10.0.0.2 ping statistics --- 00:25:24.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.779 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:25:24.779 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:24.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:24.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.381 ms 00:25:24.779 00:25:24.779 --- 10.0.0.1 ping statistics --- 00:25:24.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.779 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:25:24.779 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:24.779 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:25:24.779 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:24.779 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:24.779 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:24.779 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:24.779 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:24.779 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:24.779 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:25.039 11:36:53 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:25.039 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:25.039 11:36:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:25.039 11:36:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:25.039 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3662859 00:25:25.039 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3662859 00:25:25.039 11:36:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:25.039 11:36:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3662859 ']' 00:25:25.039 11:36:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:25.039 11:36:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:25.039 11:36:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:25.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:25.039 11:36:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:25.039 11:36:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:25.039 [2024-07-15 11:36:53.566485] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:25:25.039 [2024-07-15 11:36:53.566551] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:25.039 EAL: No free 2048 kB hugepages reported on node 1 00:25:25.039 [2024-07-15 11:36:53.652953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:25.299 [2024-07-15 11:36:53.746806] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:25.299 [2024-07-15 11:36:53.746859] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:25.299 [2024-07-15 11:36:53.746867] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:25.299 [2024-07-15 11:36:53.746875] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:25.299 [2024-07-15 11:36:53.746881] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:25.299 [2024-07-15 11:36:53.747022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:25.299 [2024-07-15 11:36:53.747190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:25.299 [2024-07-15 11:36:53.747223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:25.870 11:36:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:25.870 11:36:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:25.870 11:36:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:25.870 11:36:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:25.870 11:36:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:25.870 11:36:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:25.870 11:36:54 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:25.870 [2024-07-15 11:36:54.528779] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:25.870 11:36:54 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:26.131 Malloc0 00:25:26.131 11:36:54 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:26.391 11:36:54 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:26.391 11:36:55 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:26.650 [2024-07-15 11:36:55.226621] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:26.650 11:36:55 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:26.910 [2024-07-15 
11:36:55.383026] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:26.910 11:36:55 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:26.910 [2024-07-15 11:36:55.551510] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:26.910 11:36:55 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3663229 00:25:26.910 11:36:55 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:26.910 11:36:55 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:26.910 11:36:55 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3663229 /var/tmp/bdevperf.sock 00:25:26.910 11:36:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3663229 ']' 00:25:26.910 11:36:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:26.910 11:36:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:26.910 11:36:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:26.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:26.910 11:36:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:26.910 11:36:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:27.860 11:36:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:27.860 11:36:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:27.860 11:36:56 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:28.120 NVMe0n1 00:25:28.121 11:36:56 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:28.381 00:25:28.381 11:36:57 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3663563 00:25:28.381 11:36:57 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:28.381 11:36:57 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:29.762 11:36:58 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:29.762 [2024-07-15 11:36:58.184111] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f6c50 is same with the state(5) to be set 00:25:29.762 [2024-07-15 11:36:58.184151] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x24f6c50 is same with the state(5) to be set
[... identical tcp.c:1607:nvmf_tcp_qpair_set_recv_state *ERROR* lines for tqpair=0x24f6c50 repeat here; duplicate entries omitted ...]
00:25:29.763 11:36:58 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:25:33.063 11:37:01 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:33.063
00:25:33.063 11:37:01 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:33.325 [2024-07-15 11:37:01.793913] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f8370 is same with the state(5) to be set
[... identical tcp.c:1607:nvmf_tcp_qpair_set_recv_state *ERROR* lines for tqpair=0x24f8370 repeat here; duplicate entries omitted ...]
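For orientation: stripped of the repeated state-change errors, the path-switch sequence this part of the log exercises boils down to a handful of rpc.py calls. The recap below is only an illustrative sketch assembled from the commands recorded in this log (the address 10.0.0.2, ports 4420-4422, the socket /var/tmp/bdevperf.sock and the NQN nqn.2016-06.io.spdk:cnode1 are this run's values; paths are shortened to the scripts/ directory of the SPDK checkout used above), and the comments describe the intended effect rather than anything the script itself prints:

  # attach two paths to the same subsystem under the single bdev name NVMe0
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # drop the listener behind the first path; I/O is expected to continue on the second
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # attach a third path and drop the second listener, forcing another switch
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The sleep calls interleaved in the log give the host side time to react to each removed listener before the next step; if reproducing this by hand, the current listener set can also be checked between steps with the standard nvmf_subsystem_get_listeners RPC (not used by this script).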
00:25:33.326 11:37:01 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:25:36.643 11:37:04 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:36.643 [2024-07-15 11:37:04.972914] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:36.643 11:37:05 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:25:37.621 11:37:06 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:37.621 [2024-07-15 11:37:06.149413] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f8a70 is same with the state(5) to be set
[... identical tcp.c:1607:nvmf_tcp_qpair_set_recv_state *ERROR* lines for tqpair=0x24f8a70 repeat here; duplicate entries omitted ...]
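The I/O that rides across these listener switches comes from the bdevperf process started near the top of this excerpt with '-z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f', whose own log is what host/failover.sh@63 dumps from try.txt further below. As a rough sketch, again assembled from the invocations recorded in this log (paths shortened to the SPDK checkout) rather than from a reference example, the same run could be driven by hand like this:

  # start bdevperf idle (-z) and let it wait for RPCs on its own socket:
  # queue depth 128, 4 KiB I/O, verify workload, 15 second run
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  # give it a bdev to exercise: the first NVMe-oF path, as above
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # kick off the configured workload over the same socket
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The 'Running I/O for 15 seconds...' banner and the per-command 'ABORTED - SQ DELETION' notices in the try.txt dump below are essentially this bdevperf instance recording the commands that were still outstanding on a path when its listener was removed.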
00:25:37.622 11:37:06 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3663563
00:25:44.214 0
00:25:44.214 11:37:12 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3663229
00:25:44.214 11:37:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3663229 ']'
00:25:44.214 11:37:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3663229
00:25:44.214 11:37:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:25:44.214 11:37:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:44.214 11:37:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3663229
00:25:44.214 11:37:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:25:44.214 11:37:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:25:44.214 11:37:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3663229' killing process with pid 3663229 11:37:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3663229 11:37:12 nvmf_tcp.nvmf_failover
-- common/autotest_common.sh@972 -- # wait 3663229 00:25:44.215 11:37:12 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:44.215 [2024-07-15 11:36:55.628628] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:25:44.215 [2024-07-15 11:36:55.628688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3663229 ] 00:25:44.215 EAL: No free 2048 kB hugepages reported on node 1 00:25:44.215 [2024-07-15 11:36:55.687583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.215 [2024-07-15 11:36:55.751842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.215 Running I/O for 15 seconds... 00:25:44.215 [2024-07-15 11:36:58.184851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.215 [2024-07-15 11:36:58.184885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.215 [2024-07-15 11:36:58.184903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.215 [2024-07-15 11:36:58.184911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.215 [2024-07-15 11:36:58.184921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.215 [2024-07-15 11:36:58.184928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.215 [2024-07-15 11:36:58.184938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.215 [2024-07-15 11:36:58.184945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.215 [2024-07-15 11:36:58.184955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.215 [2024-07-15 11:36:58.184962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.215 [2024-07-15 11:36:58.184972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.215 [2024-07-15 11:36:58.184980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.215 [2024-07-15 11:36:58.184989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.215 [2024-07-15 11:36:58.184996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.215 [2024-07-15 11:36:58.185005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:44.215 [2024-07-15 11:36:58.185013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.215 [2024-07-15 11:36:58.185022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.215 [2024-07-15 11:36:58.185030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.215 [2024-07-15 11:36:58.185039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.215 [2024-07-15 11:36:58.185046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.215 [2024-07-15 11:36:58.185055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.215 [2024-07-15 11:36:58.185062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.215 [2024-07-15 11:36:58.185077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.215 [2024-07-15 11:36:58.185085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.215 [2024-07-15 11:36:58.185094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.215 [2024-07-15 11:36:58.185101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.215 [2024-07-15 11:36:58.185110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.215 [2024-07-15 11:36:58.185117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.215 [2024-07-15 11:36:58.185131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.215 [2024-07-15 11:36:58.185138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.215 [2024-07-15 11:36:58.185147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.215 [2024-07-15 11:36:58.185155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.215 [2024-07-15 11:36:58.185164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.215 [2024-07-15 11:36:58.185171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.215 [2024-07-15 11:36:58.185180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.215 
[2024-07-15 11:36:58.185188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... log truncated here: repeated NOTICE pairs from nvme_qpair.c (11:36:58.185197 - 11:36:58.187015) for READ sqid:1 nsid:1 len:8 SGL TRANSPORT DATA BLOCK, lba 94944 through 95808 in steps of 8, each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:25:44.216 [2024-07-15 11:36:58.187039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:44.216 [2024-07-15 11:36:58.187046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:44.216 [2024-07-15 11:36:58.187053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95816 len:8 PRP1 0x0 PRP2 0x0
00:25:44.216 [2024-07-15 11:36:58.187061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:44.216 [2024-07-15 11:36:58.187098] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e85300 was disconnected and freed. reset controller.
00:25:44.216 [2024-07-15 11:36:58.187108] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:25:44.216 [2024-07-15 11:36:58.187131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:44.216 [2024-07-15 11:36:58.187139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:44.216 [2024-07-15 11:36:58.187148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:44.216 [2024-07-15 11:36:58.187155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:44.216 [2024-07-15 11:36:58.187163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:44.216 [2024-07-15 11:36:58.187170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:44.216 [2024-07-15 11:36:58.187178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:44.216 [2024-07-15 11:36:58.187185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:44.216 [2024-07-15 11:36:58.187192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.216 [2024-07-15 11:36:58.190748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.216 [2024-07-15 11:36:58.190771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e63ef0 (9): Bad file descriptor
00:25:44.216 [2024-07-15 11:36:58.223820] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
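The status printed in parentheses on these completions, e.g. "(00/08)", is the NVMe status code type / status code pair: SCT 0x0 with SC 0x08 is the generic "Command Aborted due to SQ Deletion" status reported for I/O still queued on qid:1 when the submission queue is torn down during the failover above. A minimal standalone sketch of that decoding (values hard-coded from this log for illustration; not taken from the SPDK sources or part of the test output):

/* decode_status.c - minimal sketch, not part of the SPDK test suite.
 * Decodes the "(SCT/SC)" pair that appears in the completions above,
 * e.g. "ABORTED - SQ DELETION (00/08)". The name mapping below covers
 * only the single status seen in this log. */
#include <stdio.h>

int main(void)
{
    unsigned sct = 0x00;  /* status code type: 0x0 = Generic Command Status */
    unsigned sc  = 0x08;  /* status code: 0x08 = Command Aborted due to SQ Deletion */

    const char *name = (sct == 0x00 && sc == 0x08)
        ? "ABORTED - SQ DELETION"
        : "unknown (see the NVMe base specification status tables)";

    printf("%s (%02x/%02x)\n", name, sct, sc);
    return 0;
}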
00:25:44.216 [2024-07-15 11:37:01.795284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:44.216 [2024-07-15 11:37:01.795319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... log truncated here: repeated NOTICE pairs from nvme_qpair.c (11:37:01.795334 - 11:37:01.796640) for READ sqid:1 nsid:1 len:8 SGL TRANSPORT DATA BLOCK, lba 36800 through 37424 in steps of 8, each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:25:44.219 [2024-07-15 11:37:01.796649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:37432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:44.219 [2024-07-15 11:37:01.796656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.796665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.219 [2024-07-15 11:37:01.796672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.796681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.219 [2024-07-15 11:37:01.796688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.796698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.219 [2024-07-15 11:37:01.796705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.796714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:37464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.219 [2024-07-15 11:37:01.796721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.796730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.219 [2024-07-15 11:37:01.796737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.796746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:37480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.219 [2024-07-15 11:37:01.796753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.796763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.796769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.796779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:37496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.796786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.796795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:37504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.796802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.796816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:37512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.796823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.796832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:37520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.796839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.796848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:37528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.796856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.796865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:37536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.796872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.796881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:37544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.796888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.796897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.796904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.796913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.796920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.796930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:37568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.796937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.796946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:37576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.796953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.796962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:37584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.796969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.796978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:37592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.796985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 
[2024-07-15 11:37:01.796994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:37600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.797001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:37608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.797019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:37616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.797035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.797051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:37632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.797067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:37640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.797084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:37648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.797100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.797116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.797136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:37672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.797152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797161] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:37680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.797168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:37688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.797184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:37696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.797200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.797215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:37712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.797233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:37720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.797249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.797265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:37736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.797282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:37744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.797298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.797314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:120 nsid:1 lba:37760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.797330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:37768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.797346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.797361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:37784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.797378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:37792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.797393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:37800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.219 [2024-07-15 11:37:01.797410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:44.219 [2024-07-15 11:37:01.797437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:44.219 [2024-07-15 11:37:01.797445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37808 len:8 PRP1 0x0 PRP2 0x0 00:25:44.219 [2024-07-15 11:37:01.797453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797489] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e87270 was disconnected and freed. reset controller. 
00:25:44.219 [2024-07-15 11:37:01.797499] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:44.219 [2024-07-15 11:37:01.797519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.219 [2024-07-15 11:37:01.797527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.219 [2024-07-15 11:37:01.797542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.219 [2024-07-15 11:37:01.797557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.219 [2024-07-15 11:37:01.797572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:01.797580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.219 [2024-07-15 11:37:01.797610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e63ef0 (9): Bad file descriptor 00:25:44.219 [2024-07-15 11:37:01.801151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.219 [2024-07-15 11:37:01.833714] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:44.219 [2024-07-15 11:37:06.151400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:49768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.219 [2024-07-15 11:37:06.151435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:06.151453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:49776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.219 [2024-07-15 11:37:06.151462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:06.151471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:49784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.219 [2024-07-15 11:37:06.151479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:06.151488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:49792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.219 [2024-07-15 11:37:06.151496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:06.151505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:49800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.219 [2024-07-15 11:37:06.151512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:06.151521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:49808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.219 [2024-07-15 11:37:06.151533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:06.151543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:49816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.219 [2024-07-15 11:37:06.151550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:06.151559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:49824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.219 [2024-07-15 11:37:06.151566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:06.151576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:49832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.219 [2024-07-15 11:37:06.151583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:06.151592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:49840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.219 [2024-07-15 11:37:06.151599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 
11:37:06.151608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:49848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.219 [2024-07-15 11:37:06.151615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:06.151625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:49856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.219 [2024-07-15 11:37:06.151632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:06.151641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:49864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.219 [2024-07-15 11:37:06.151648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:06.151657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:49872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.219 [2024-07-15 11:37:06.151664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:06.151674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:49880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.219 [2024-07-15 11:37:06.151681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:06.151691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:49888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.219 [2024-07-15 11:37:06.151698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:06.151707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:49896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.219 [2024-07-15 11:37:06.151714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.219 [2024-07-15 11:37:06.151724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:49904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.219 [2024-07-15 11:37:06.151731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.151741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:49912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.151749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.151758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:49920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.151765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.151774] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:49928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.151781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.151790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:49936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.151797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.151806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:49944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.151813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.151823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:49952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.151831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.151840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:49960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.151848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.151857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:49968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.151864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.151873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:49976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.151880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.151889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:49984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.151896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.151905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:49992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.151912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.151922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:50000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.151929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.151938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:11 nsid:1 lba:50008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.151946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.151956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:50016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.151963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.151972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:50024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.151981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.151990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:50032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.151997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:50040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:50048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:50056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:50064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:50072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:50080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:50088 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:50096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:50104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:50112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:50120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:50128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:50136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:50144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:50152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:50160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:50168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:44.220 [2024-07-15 11:37:06.152280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:50176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:50184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:50192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:50200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:50208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:50216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:50224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:50232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:50240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:50248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152443] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:50256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:50264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:50272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:50280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:50288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:50296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:50304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:50312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:50320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:50328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:50336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:50344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:50352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:50360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:50368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:50376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:50384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:50392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:50400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:50408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.220 [2024-07-15 11:37:06.152771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.220 [2024-07-15 11:37:06.152782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.221 [2024-07-15 11:37:06.152789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.152798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:50424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.221 [2024-07-15 11:37:06.152805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.152815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:50432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.152822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.152831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:50440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.152838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.152847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:50448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.152854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.152863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:50456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.152870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.152879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:50464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.152886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.152895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:50472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.152902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.152911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:50480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.152918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.152927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:50488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.152934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.152943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:50496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.152950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.152959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:50504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.152966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.152975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:50512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.152983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.152992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:50520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.152999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:50528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:50536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:50544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:50552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:50560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:50568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 
11:37:06.153105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:50576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:50584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:50592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:50600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:50608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:50616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:50624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:50632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:50640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:50648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153273] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:50664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:50672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:50680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:50688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:50696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:50704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:50712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:50720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:50728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:71 nsid:1 lba:50736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:50744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:50752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:50760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:50768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:50776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.221 [2024-07-15 11:37:06.153520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:44.221 [2024-07-15 11:37:06.153550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:44.221 [2024-07-15 11:37:06.153556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50784 len:8 PRP1 0x0 PRP2 0x0 00:25:44.221 [2024-07-15 11:37:06.153564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153602] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e87f20 was disconnected and freed. reset controller. 
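The block above is the expected burst of aborted I/O: every command still queued on qpair 1 is completed with ABORTED - SQ DELETION when bdev_nvme drops the connection to fail over. A quick way to summarize such a dump when reading it offline (a debugging aid, not part of the test; the log file name here is an assumption):

  # Count aborted completions, then break the aborted commands down by opcode.
  grep -c 'ABORTED - SQ DELETION' bdevperf.log
  grep -oE '(READ|WRITE) sqid:[0-9]+ cid:[0-9]+' bdevperf.log | awk '{print $1}' | sort | uniq -c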
00:25:44.221 [2024-07-15 11:37:06.153611] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:44.221 [2024-07-15 11:37:06.153631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.221 [2024-07-15 11:37:06.153639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.221 [2024-07-15 11:37:06.153655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.221 [2024-07-15 11:37:06.153672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.221 [2024-07-15 11:37:06.153687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.221 [2024-07-15 11:37:06.153695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.221 [2024-07-15 11:37:06.153725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e63ef0 (9): Bad file descriptor 00:25:44.221 [2024-07-15 11:37:06.157269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.221 [2024-07-15 11:37:06.368524] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
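That is the last "Resetting controller successful" notice of the run, and it is what the harness keys on next. A minimal sketch of that check, assuming the bdevperf output was captured to try.txt as in this trace:

  # failover.sh expects one successful reset per forced path removal, three in total.
  count=$(grep -c 'Resetting controller successful' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt)
  if (( count != 3 )); then
      echo "expected 3 controller resets, got $count" >&2
      exit 1
  fi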
00:25:44.221 00:25:44.221 Latency(us) 00:25:44.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:44.221 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:44.221 Verification LBA range: start 0x0 length 0x4000 00:25:44.221 NVMe0n1 : 15.01 11249.90 43.94 661.16 0.00 10718.69 1051.31 15182.51 00:25:44.221 =================================================================================================================== 00:25:44.221 Total : 11249.90 43.94 661.16 0.00 10718.69 1051.31 15182.51 00:25:44.221 Received shutdown signal, test time was about 15.000000 seconds 00:25:44.221 00:25:44.221 Latency(us) 00:25:44.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:44.221 =================================================================================================================== 00:25:44.221 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:44.221 11:37:12 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:44.221 11:37:12 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:44.221 11:37:12 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:44.221 11:37:12 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3666435 00:25:44.221 11:37:12 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3666435 /var/tmp/bdevperf.sock 00:25:44.221 11:37:12 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:44.221 11:37:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3666435 ']' 00:25:44.221 11:37:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:44.221 11:37:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:44.221 11:37:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:44.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
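The second bdevperf instance above is started with -z, so it idles until driven over its RPC socket. The trace that follows wires the same controller name to three portals of cnode1 and then yanks the active one; a condensed sketch of that sequence, using the paths from this trace and a simple poll in place of the harness's waitforlisten helper:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$spdk/scripts/rpc.py"
  sock=/var/tmp/bdevperf.sock

  # Launch bdevperf in RPC-server mode (-z) and wait for its socket to appear.
  "$spdk/build/examples/bdevperf" -z -r "$sock" -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  until [ -S "$sock" ]; do sleep 0.2; done

  # Expose two extra portals on the target, then attach NVMe0 to all three inside bdevperf.
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  for port in 4420 4421 4422; do
      "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
          -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done

  # Drop the active path; bdev_nvme should fail over to one of the spares.
  "$rpc" -s "$sock" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3
  "$rpc" -s "$sock" bdev_nvme_get_controllers | grep -q NVMe0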
00:25:44.221 11:37:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:44.221 11:37:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:44.792 11:37:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:44.792 11:37:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:44.792 11:37:13 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:44.792 [2024-07-15 11:37:13.346238] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:44.792 11:37:13 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:45.052 [2024-07-15 11:37:13.506600] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:45.052 11:37:13 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:45.312 NVMe0n1 00:25:45.312 11:37:13 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:45.573 00:25:45.573 11:37:14 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:45.834 00:25:45.834 11:37:14 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:45.834 11:37:14 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:46.094 11:37:14 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:46.376 11:37:14 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:49.676 11:37:17 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:49.676 11:37:17 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:49.676 11:37:18 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3667596 00:25:49.676 11:37:18 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3667596 00:25:49.676 11:37:18 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:50.616 0 00:25:50.616 11:37:19 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:50.616 [2024-07-15 11:37:12.436865] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:25:50.616 [2024-07-15 11:37:12.436924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3666435 ] 00:25:50.616 EAL: No free 2048 kB hugepages reported on node 1 00:25:50.616 [2024-07-15 11:37:12.496034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.616 [2024-07-15 11:37:12.558942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.616 [2024-07-15 11:37:14.827762] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:50.616 [2024-07-15 11:37:14.827804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.616 [2024-07-15 11:37:14.827815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.616 [2024-07-15 11:37:14.827825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.616 [2024-07-15 11:37:14.827833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.616 [2024-07-15 11:37:14.827841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.616 [2024-07-15 11:37:14.827848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.616 [2024-07-15 11:37:14.827856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.616 [2024-07-15 11:37:14.827863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.616 [2024-07-15 11:37:14.827875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.616 [2024-07-15 11:37:14.827902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.616 [2024-07-15 11:37:14.827916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x857ef0 (9): Bad file descriptor 00:25:50.616 [2024-07-15 11:37:14.922349] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:50.616 Running I/O for 1 seconds... 
00:25:50.616 00:25:50.616 Latency(us) 00:25:50.616 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:50.616 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:50.616 Verification LBA range: start 0x0 length 0x4000 00:25:50.616 NVMe0n1 : 1.00 11588.07 45.27 0.00 0.00 10981.98 2034.35 11687.25 00:25:50.616 =================================================================================================================== 00:25:50.616 Total : 11588.07 45.27 0.00 0.00 10981.98 2034.35 11687.25 00:25:50.616 11:37:19 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:50.616 11:37:19 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:50.616 11:37:19 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:50.877 11:37:19 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:50.877 11:37:19 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:51.138 11:37:19 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:51.138 11:37:19 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:54.440 11:37:22 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:54.440 11:37:22 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:54.440 11:37:22 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3666435 00:25:54.440 11:37:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3666435 ']' 00:25:54.440 11:37:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3666435 00:25:54.440 11:37:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:54.440 11:37:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:54.440 11:37:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3666435 00:25:54.440 11:37:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:54.440 11:37:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:54.440 11:37:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3666435' 00:25:54.440 killing process with pid 3666435 00:25:54.440 11:37:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3666435 00:25:54.440 11:37:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3666435 00:25:54.699 11:37:23 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:54.700 11:37:23 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:54.700 11:37:23 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:54.700 
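After the one-second verify pass the script walks the spare paths back down, checks that the controller is still registered, and stops the bdevperf it launched. The equivalent RPC sequence, condensed, with the same socket and NQN as above and pid handling simplified:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  bdevperf_pid=3666435   # pid recorded when bdevperf was launched (this run's value)

  for port in 4422 4421; do
      "$rpc" -s "$sock" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 \
          -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
      "$rpc" -s "$sock" bdev_nvme_get_controllers | grep -q NVMe0
  done
  sleep 3

  # Stop bdevperf, then remove the test subsystem from the target.
  kill "$bdevperf_pid" && wait "$bdevperf_pid"
  "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1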
11:37:23 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:54.700 11:37:23 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:54.700 11:37:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:54.700 11:37:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:54.700 11:37:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:54.700 11:37:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:54.700 11:37:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:54.700 11:37:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:54.700 rmmod nvme_tcp 00:25:54.700 rmmod nvme_fabrics 00:25:54.959 rmmod nvme_keyring 00:25:54.959 11:37:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:54.959 11:37:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:54.959 11:37:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:54.959 11:37:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3662859 ']' 00:25:54.959 11:37:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3662859 00:25:54.959 11:37:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3662859 ']' 00:25:54.959 11:37:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3662859 00:25:54.959 11:37:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:54.959 11:37:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:54.959 11:37:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3662859 00:25:54.959 11:37:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:54.959 11:37:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:54.959 11:37:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3662859' 00:25:54.959 killing process with pid 3662859 00:25:54.959 11:37:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3662859 00:25:54.959 11:37:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3662859 00:25:54.959 11:37:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:54.959 11:37:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:54.959 11:37:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:54.959 11:37:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:54.959 11:37:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:54.959 11:37:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.959 11:37:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:54.959 11:37:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.504 11:37:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:57.504 00:25:57.504 real 0m39.513s 00:25:57.504 user 2m2.096s 00:25:57.504 sys 0m8.131s 00:25:57.504 11:37:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:57.504 11:37:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
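nvmftestfini above undoes the node state the failover test set up: the kernel NVMe/TCP initiator modules are unloaded, the nvmf_tgt started at the beginning of the test is killed, and the test address is flushed from the initiator-side interface. Roughly, with the module names, pid, and interface name taken from this trace:

  # Unload the initiator-side modules loaded for the kernel connect tests.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # Stop the target application (3662859 in this run, saved as nvmfpid at startup).
  nvmfpid=3662859
  kill "$nvmfpid"

  # Drop the 10.0.0.x test address from the initiator-side interface.
  ip -4 addr flush cvl_0_1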
00:25:57.504 ************************************ 00:25:57.504 END TEST nvmf_failover 00:25:57.504 ************************************ 00:25:57.504 11:37:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:57.504 11:37:25 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:57.504 11:37:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:57.504 11:37:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:57.504 11:37:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:57.504 ************************************ 00:25:57.504 START TEST nvmf_host_discovery 00:25:57.504 ************************************ 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:57.504 * Looking for test storage... 00:25:57.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:57.504 11:37:25 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:57.504 11:37:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:04.138 11:37:32 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:04.138 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:04.138 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:04.138 11:37:32 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:04.138 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:04.138 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:04.138 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:04.139 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:04.139 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:04.139 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:04.139 11:37:32 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:04.139 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:04.139 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:04.139 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:04.399 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:04.399 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:04.399 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:04.399 11:37:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:04.399 11:37:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:04.399 11:37:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:04.399 11:37:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:04.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:04.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:26:04.400 00:26:04.400 --- 10.0.0.2 ping statistics --- 00:26:04.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.400 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:26:04.400 11:37:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:04.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:04.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:26:04.400 00:26:04.400 --- 10.0.0.1 ping statistics --- 00:26:04.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.400 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:26:04.400 11:37:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:04.400 11:37:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:26:04.400 11:37:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:04.400 11:37:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:04.400 11:37:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:04.400 11:37:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:04.400 11:37:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:04.400 11:37:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:04.400 11:37:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:04.660 11:37:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:04.660 11:37:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:04.660 11:37:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:04.660 11:37:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.661 11:37:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3672617 00:26:04.661 11:37:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
3672617 00:26:04.661 11:37:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:04.661 11:37:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3672617 ']' 00:26:04.661 11:37:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:04.661 11:37:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:04.661 11:37:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:04.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:04.661 11:37:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:04.661 11:37:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.661 [2024-07-15 11:37:33.167470] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:26:04.661 [2024-07-15 11:37:33.167525] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:04.661 EAL: No free 2048 kB hugepages reported on node 1 00:26:04.661 [2024-07-15 11:37:33.254349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.661 [2024-07-15 11:37:33.347847] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:04.661 [2024-07-15 11:37:33.347900] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:04.661 [2024-07-15 11:37:33.347908] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:04.661 [2024-07-15 11:37:33.347915] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:04.661 [2024-07-15 11:37:33.347921] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
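The host_discovery test keeps its target inside the cvl_0_0_ns_spdk namespace set up above and then provisions it over RPC before any host attaches. A condensed sketch of that launch and provisioning, assuming rpc_cmd in the trace resolves to scripts/rpc.py against the target's default RPC socket (the harness also waits for that socket before issuing commands):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$spdk/scripts/rpc.py"

  # Start nvmf_tgt in the target namespace on core mask 0x2.
  ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!

  # Create the TCP transport, listen for discovery on 8009, and prepare two null bdevs
  # that will later be exported through nqn.2016-06.io.spdk:cnode0.
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  "$rpc" bdev_null_create null0 1000 512
  "$rpc" bdev_null_create null1 1000 512
  "$rpc" bdev_wait_for_examine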
00:26:04.661 [2024-07-15 11:37:33.347954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:05.605 11:37:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:05.605 11:37:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:26:05.605 11:37:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:05.605 11:37:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:05.605 11:37:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.605 11:37:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:05.605 11:37:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:05.605 11:37:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.605 11:37:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.605 [2024-07-15 11:37:34.004241] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:05.605 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.605 11:37:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:05.605 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.605 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.605 [2024-07-15 11:37:34.016477] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:05.605 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.605 11:37:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:05.605 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.605 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.605 null0 00:26:05.605 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.605 11:37:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:05.605 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.605 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.605 null1 00:26:05.605 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.605 11:37:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:05.605 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.605 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.605 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.605 11:37:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3672951 00:26:05.605 11:37:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3672951 /tmp/host.sock 00:26:05.605 11:37:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:05.605 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3672951 ']' 00:26:05.605 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:26:05.605 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:05.605 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:05.605 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:05.605 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:05.605 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.605 [2024-07-15 11:37:34.111666] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:26:05.605 [2024-07-15 11:37:34.111732] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3672951 ] 00:26:05.605 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.605 [2024-07-15 11:37:34.175006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.605 [2024-07-15 11:37:34.249191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.546 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:06.546 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:26:06.546 11:37:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:06.546 11:37:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:06.546 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.546 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.546 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.546 11:37:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:06.546 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.546 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.546 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.546 11:37:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:06.546 11:37:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:06.546 11:37:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:06.546 11:37:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:06.546 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.546 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.546 11:37:34 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:06.546 11:37:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:06.546 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.546 11:37:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:06.546 11:37:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:06.546 11:37:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:06.546 11:37:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:06.546 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.546 11:37:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:06.546 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.546 11:37:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:06.546 11:37:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.546 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:06.546 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:06.546 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.546 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.546 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.546 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:06.546 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:06.546 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:06.546 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.546 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:06.546 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.546 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:06.546 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.546 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:06.546 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:06.546 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:06.546 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:06.546 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.546 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.546 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:06.546 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:06.546 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.546 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:06.546 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 null0 00:26:06.546 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.546 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.546 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.546 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:06.547 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:06.547 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:06.547 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.547 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.547 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:06.547 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:06.547 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.547 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:06.547 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:06.547 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:06.547 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:06.547 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.547 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:06.547 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.547 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:06.547 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.547 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:06.547 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:06.547 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.808 [2024-07-15 11:37:35.251574] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.808 11:37:35 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:26:06.808 11:37:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:07.379 [2024-07-15 11:37:35.923585] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:07.379 [2024-07-15 11:37:35.923606] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:07.380 [2024-07-15 11:37:35.923620] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:07.380 [2024-07-15 11:37:36.011907] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:07.640 [2024-07-15 11:37:36.239031] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:07.640 [2024-07-15 11:37:36.239054] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:07.901 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:08.163 [2024-07-15 11:37:36.811666] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:08.163 [2024-07-15 11:37:36.812219] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:08.163 [2024-07-15 11:37:36.812243] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:08.163 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:08.164 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:08.164 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:08.164 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:08.164 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:08.164 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.164 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:08.164 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:08.164 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:08.164 11:37:36 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.424 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.424 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:08.424 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:08.424 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:08.424 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:08.424 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:08.424 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:08.424 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:08.424 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:08.424 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:08.424 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.424 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:08.424 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:08.424 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:08.425 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.425 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:08.425 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:08.425 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:08.425 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:08.425 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:08.425 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:08.425 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:08.425 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:08.425 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:08.425 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:08.425 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:08.425 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.425 11:37:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:08.425 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:08.425 [2024-07-15 11:37:36.941642] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:08.425 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.425 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:08.425 11:37:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:08.425 [2024-07-15 11:37:37.007343] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:08.425 [2024-07-15 11:37:37.007361] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:08.425 [2024-07-15 11:37:37.007366] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:09.365 11:37:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:09.365 11:37:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:09.365 11:37:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:09.365 11:37:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:09.365 11:37:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:09.365 11:37:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:09.365 11:37:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.365 11:37:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:09.365 11:37:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:09.365 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.365 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:09.365 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:09.365 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:09.365 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:09.365 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:09.365 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:09.365 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:09.365 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:09.365 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:09.365 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:09.365 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:09.365 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.365 11:37:38 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:26:09.365 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:09.365 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.625 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:09.625 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:09.625 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:09.625 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:09.625 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:09.625 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.625 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:09.625 [2024-07-15 11:37:38.095581] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:09.625 [2024-07-15 11:37:38.095603] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:09.625 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.625 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:09.625 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:09.625 [2024-07-15 11:37:38.100852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.626 [2024-07-15 11:37:38.100870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.626 [2024-07-15 11:37:38.100879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.626 [2024-07-15 11:37:38.100887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.626 [2024-07-15 11:37:38.100895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.626 [2024-07-15 11:37:38.100902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.626 [2024-07-15 11:37:38.100910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.626 [2024-07-15 11:37:38.100917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.626 [2024-07-15 11:37:38.100925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71d9b0 is same with the state(5) to be set 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' 
'"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:09.626 [2024-07-15 11:37:38.110867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x71d9b0 (9): Bad file descriptor 00:26:09.626 [2024-07-15 11:37:38.120906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:09.626 [2024-07-15 11:37:38.121312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.626 [2024-07-15 11:37:38.121349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x71d9b0 with addr=10.0.0.2, port=4420 00:26:09.626 [2024-07-15 11:37:38.121362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71d9b0 is same with the state(5) to be set 00:26:09.626 [2024-07-15 11:37:38.121382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x71d9b0 (9): Bad file descriptor 00:26:09.626 [2024-07-15 11:37:38.121396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:09.626 [2024-07-15 11:37:38.121409] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:09.626 [2024-07-15 11:37:38.121419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:09.626 [2024-07-15 11:37:38.121435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.626 [2024-07-15 11:37:38.130962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:09.626 [2024-07-15 11:37:38.131536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.626 [2024-07-15 11:37:38.131573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x71d9b0 with addr=10.0.0.2, port=4420 00:26:09.626 [2024-07-15 11:37:38.131584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71d9b0 is same with the state(5) to be set 00:26:09.626 [2024-07-15 11:37:38.131602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x71d9b0 (9): Bad file descriptor 00:26:09.626 [2024-07-15 11:37:38.131615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:09.626 [2024-07-15 11:37:38.131621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:09.626 [2024-07-15 11:37:38.131629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:26:09.626 [2024-07-15 11:37:38.131644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.626 [2024-07-15 11:37:38.141018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:09.626 [2024-07-15 11:37:38.141526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.626 [2024-07-15 11:37:38.141563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x71d9b0 with addr=10.0.0.2, port=4420 00:26:09.626 [2024-07-15 11:37:38.141573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71d9b0 is same with the state(5) to be set 00:26:09.626 [2024-07-15 11:37:38.141592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x71d9b0 (9): Bad file descriptor 00:26:09.626 [2024-07-15 11:37:38.141604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:09.626 [2024-07-15 11:37:38.141611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:09.626 [2024-07-15 11:37:38.141619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:09.626 [2024-07-15 11:37:38.141633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.626 [2024-07-15 11:37:38.151074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:09.626 [2024-07-15 11:37:38.151500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.626 [2024-07-15 11:37:38.151514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x71d9b0 with addr=10.0.0.2, port=4420 00:26:09.626 [2024-07-15 11:37:38.151521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71d9b0 is same with the state(5) to be set 00:26:09.626 [2024-07-15 11:37:38.151533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x71d9b0 (9): Bad file descriptor 00:26:09.626 [2024-07-15 11:37:38.151544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:09.626 [2024-07-15 11:37:38.151550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:09.626 [2024-07-15 11:37:38.151557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:09.626 [2024-07-15 11:37:38.151572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
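The is_notification_count_eq checks scattered through this trace lean on the host app's notify_get_notifications RPC: get_notification_count fetches the notifications recorded since the last seen notify_id and counts them with jq. A sketch of that helper as it can be read off the @74/@75 lines; the notify_id arithmetic is inferred from the cursor values 0 -> 1 -> 2 -> 4 seen in the log, and rpc_cmd is the test suite's wrapper around SPDK's JSON-RPC client.

  get_notification_count() {
      # count bdev register/unregister notifications on the host app
      # (listening on /tmp/host.sock), starting from the current cursor
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
          | jq '. | length')
      # advance the cursor so the next check only sees new events
      notify_id=$((notify_id + notification_count))
  }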
00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:09.626 [2024-07-15 11:37:38.161134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:09.626 [2024-07-15 11:37:38.161541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.626 [2024-07-15 11:37:38.161552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x71d9b0 with addr=10.0.0.2, port=4420 00:26:09.626 [2024-07-15 11:37:38.161559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71d9b0 is same with the state(5) to be set 00:26:09.626 [2024-07-15 11:37:38.161570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x71d9b0 (9): Bad file descriptor 00:26:09.626 [2024-07-15 11:37:38.161580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:09.626 [2024-07-15 11:37:38.161586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:09.626 [2024-07-15 11:37:38.161593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:09.626 [2024-07-15 11:37:38.161603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
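The get_subsystem_names, get_bdev_list and get_subsystem_paths values being compared throughout this test are thin wrappers around host-side RPCs; their bodies can be read directly off the @55/@59/@63 xtrace lines and are reconstructed here as a sketch for reference.

  # controller names attached on the host side (e.g. "nvme0")
  get_subsystem_names() {
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }

  # bdevs created from the attached namespaces (e.g. "nvme0n1 nvme0n2")
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  # TCP service ports of every path of one controller (e.g. "4420 4421")
  get_subsystem_paths() {
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }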
00:26:09.626 [2024-07-15 11:37:38.171187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:09.626 [2024-07-15 11:37:38.171390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.626 [2024-07-15 11:37:38.171401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x71d9b0 with addr=10.0.0.2, port=4420 00:26:09.626 [2024-07-15 11:37:38.171408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71d9b0 is same with the state(5) to be set 00:26:09.626 [2024-07-15 11:37:38.171420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x71d9b0 (9): Bad file descriptor 00:26:09.626 [2024-07-15 11:37:38.171430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:09.626 [2024-07-15 11:37:38.171436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:09.626 [2024-07-15 11:37:38.171443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:09.626 [2024-07-15 11:37:38.171453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.626 [2024-07-15 11:37:38.181240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:09.626 [2024-07-15 11:37:38.181630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.626 [2024-07-15 11:37:38.181645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x71d9b0 with addr=10.0.0.2, port=4420 00:26:09.626 [2024-07-15 11:37:38.181652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71d9b0 is same with the state(5) to be set 00:26:09.626 [2024-07-15 11:37:38.181663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x71d9b0 (9): Bad file descriptor 00:26:09.626 [2024-07-15 11:37:38.181672] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:09.626 [2024-07-15 11:37:38.181679] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:09.626 [2024-07-15 11:37:38.181685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:09.626 [2024-07-15 11:37:38.181695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
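For context on the connect() errno 111 loop above: host/discovery.sh@127 has just removed the 4420 listener from cnode0, so the host's nvme0 controller keeps failing to reconnect to that port until the refreshed discovery log page (the "not found"/"found again" lines just below) prunes the 4420 path. The test then only needs to wait until 4421 is the sole remaining path, roughly as in this sketch built on the helpers above (4421 stands in for the script's $NVMF_SECOND_PORT).

  # drop the first listener on the target side ...
  rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # ... then poll the host until discovery has removed the 4420 path
  waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4421" ]]'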
00:26:09.626 [2024-07-15 11:37:38.183027] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:09.626 [2024-07-15 11:37:38.183044] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.626 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:09.886 
11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.886 11:37:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.270 [2024-07-15 11:37:39.542333] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:11.270 [2024-07-15 11:37:39.542351] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:11.270 [2024-07-15 11:37:39.542363] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:11.270 [2024-07-15 11:37:39.628655] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:11.270 [2024-07-15 11:37:39.696493] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:11.270 [2024-07-15 11:37:39.696522] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:26:11.270 request: 00:26:11.270 { 00:26:11.270 "name": "nvme", 00:26:11.270 "trtype": "tcp", 00:26:11.270 "traddr": "10.0.0.2", 00:26:11.270 "adrfam": "ipv4", 00:26:11.270 "trsvcid": "8009", 00:26:11.270 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:11.270 "wait_for_attach": true, 00:26:11.270 "method": "bdev_nvme_start_discovery", 00:26:11.270 "req_id": 1 00:26:11.270 } 00:26:11.270 Got JSON-RPC error response 00:26:11.270 response: 00:26:11.270 { 00:26:11.270 "code": -17, 00:26:11.270 "message": "File exists" 00:26:11.270 } 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:11.270 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.271 request: 00:26:11.271 { 00:26:11.271 "name": "nvme_second", 00:26:11.271 "trtype": "tcp", 00:26:11.271 "traddr": "10.0.0.2", 00:26:11.271 "adrfam": "ipv4", 00:26:11.271 "trsvcid": "8009", 00:26:11.271 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:11.271 "wait_for_attach": true, 00:26:11.271 "method": "bdev_nvme_start_discovery", 00:26:11.271 "req_id": 1 00:26:11.271 } 00:26:11.271 Got JSON-RPC error response 00:26:11.271 response: 00:26:11.271 { 00:26:11.271 "code": -17, 00:26:11.271 "message": "File exists" 00:26:11.271 } 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.271 11:37:39 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.271 11:37:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.654 [2024-07-15 11:37:40.953285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.654 [2024-07-15 11:37:40.953319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x738050 with addr=10.0.0.2, port=8010 00:26:12.654 [2024-07-15 11:37:40.953333] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:12.654 [2024-07-15 11:37:40.953346] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:12.654 [2024-07-15 11:37:40.953354] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:13.594 [2024-07-15 11:37:41.955637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.594 [2024-07-15 11:37:41.955660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x738050 with addr=10.0.0.2, port=8010 00:26:13.594 [2024-07-15 11:37:41.955673] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:13.594 [2024-07-15 11:37:41.955680] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:13.594 [2024-07-15 11:37:41.955686] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:14.534 [2024-07-15 11:37:42.957532] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:14.534 request: 00:26:14.534 { 00:26:14.534 "name": "nvme_second", 00:26:14.534 "trtype": "tcp", 00:26:14.534 "traddr": "10.0.0.2", 00:26:14.534 "adrfam": "ipv4", 00:26:14.534 "trsvcid": "8010", 00:26:14.534 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:14.534 "wait_for_attach": false, 00:26:14.534 "attach_timeout_ms": 3000, 00:26:14.534 "method": "bdev_nvme_start_discovery", 00:26:14.534 "req_id": 1 00:26:14.534 } 00:26:14.534 Got JSON-RPC error response 00:26:14.534 response: 00:26:14.534 { 00:26:14.534 "code": -110, 
00:26:14.534 "message": "Connection timed out" 00:26:14.534 } 00:26:14.534 11:37:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:14.534 11:37:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:14.534 11:37:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:14.534 11:37:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:14.534 11:37:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:14.534 11:37:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:14.534 11:37:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:14.534 11:37:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:14.534 11:37:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:14.534 11:37:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.534 11:37:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:14.534 11:37:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.534 11:37:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.534 11:37:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:14.534 11:37:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:14.534 11:37:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3672951 00:26:14.534 11:37:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:14.534 11:37:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:14.534 11:37:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:26:14.534 11:37:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:14.534 11:37:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:26:14.534 11:37:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:14.534 11:37:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:14.534 rmmod nvme_tcp 00:26:14.534 rmmod nvme_fabrics 00:26:14.534 rmmod nvme_keyring 00:26:14.534 11:37:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:14.534 11:37:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:26:14.534 11:37:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:26:14.534 11:37:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3672617 ']' 00:26:14.534 11:37:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3672617 00:26:14.534 11:37:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 3672617 ']' 00:26:14.534 11:37:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 3672617 00:26:14.534 11:37:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:26:14.534 11:37:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:14.534 11:37:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3672617 00:26:14.534 11:37:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:26:14.534 11:37:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:14.534 11:37:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3672617' 00:26:14.534 killing process with pid 3672617 00:26:14.534 11:37:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 3672617 00:26:14.534 11:37:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 3672617 00:26:14.794 11:37:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:14.794 11:37:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:14.794 11:37:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:14.794 11:37:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:14.794 11:37:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:14.794 11:37:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.794 11:37:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:14.794 11:37:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.726 11:37:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:16.726 00:26:16.726 real 0m19.575s 00:26:16.726 user 0m22.957s 00:26:16.726 sys 0m6.709s 00:26:16.726 11:37:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:16.726 11:37:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.726 ************************************ 00:26:16.726 END TEST nvmf_host_discovery 00:26:16.726 ************************************ 00:26:16.726 11:37:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:16.726 11:37:45 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:16.726 11:37:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:16.726 11:37:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:16.726 11:37:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:16.726 ************************************ 00:26:16.726 START TEST nvmf_host_multipath_status 00:26:16.726 ************************************ 00:26:16.726 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:16.986 * Looking for test storage... 
00:26:16.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:16.986 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:16.986 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:16.986 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:16.986 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:16.986 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:16.986 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:16.986 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:16.986 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:16.986 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:16.986 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:16.986 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:16.986 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:16.986 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:16.986 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:16.986 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:16.986 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:16.986 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:16.986 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:16.986 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:16.986 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:16.986 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:16.987 11:37:45 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:26:16.987 11:37:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:23.664 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:23.664 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:26:23.664 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:23.664 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:23.664 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:23.664 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:23.664 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:23.664 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:26:23.664 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:23.664 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:26:23.664 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:23.665 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:23.665 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
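Both physical ports of the Intel E810 NIC (device 0x8086 - 0x159b, bound to the ice driver) are picked up at this point; the net-device names behind them are resolved a few lines further down from the /sys/bus/pci/devices/<addr>/net/ glob. A rough illustration of that sysfs lookup (an approximation for one port, not the gather_supported_nvmf_pci_devs helper itself):

    # approximate sysfs lookup, not the nvmf/common.sh implementation
    pci=0000:4b:00.0                                  # first E810 port found above
    for d in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$d" ] && echo "net device for $pci: ${d##*/}"   # prints cvl_0_0 on this host
    done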
00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:23.665 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:23.665 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:23.665 11:37:52 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:23.665 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:23.926 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:23.926 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:23.926 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:23.926 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:23.926 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:23.926 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:23.926 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:24.186 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:24.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:24.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.536 ms 00:26:24.186 00:26:24.186 --- 10.0.0.2 ping statistics --- 00:26:24.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.186 rtt min/avg/max/mdev = 0.536/0.536/0.536/0.000 ms 00:26:24.186 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:24.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:24.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms 00:26:24.186 00:26:24.186 --- 10.0.0.1 ping statistics --- 00:26:24.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.186 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:26:24.186 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:24.186 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:26:24.186 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:24.186 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:24.186 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:24.186 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:24.186 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:24.186 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:24.186 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:24.186 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:24.186 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:24.186 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:24.186 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:24.186 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3678796 00:26:24.186 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3678796 00:26:24.186 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:24.186 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3678796 ']' 00:26:24.186 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.186 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:24.186 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:24.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:24.186 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:24.186 11:37:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:24.186 [2024-07-15 11:37:52.752138] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:26:24.186 [2024-07-15 11:37:52.752202] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:24.186 EAL: No free 2048 kB hugepages reported on node 1 00:26:24.186 [2024-07-15 11:37:52.824279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:24.447 [2024-07-15 11:37:52.899536] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:24.447 [2024-07-15 11:37:52.899574] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:24.447 [2024-07-15 11:37:52.899582] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:24.447 [2024-07-15 11:37:52.899588] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:24.447 [2024-07-15 11:37:52.899593] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:24.447 [2024-07-15 11:37:52.899749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:24.447 [2024-07-15 11:37:52.899750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.017 11:37:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:25.017 11:37:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:25.017 11:37:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:25.017 11:37:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:25.017 11:37:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:25.017 11:37:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:25.017 11:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3678796 00:26:25.017 11:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:25.017 [2024-07-15 11:37:53.700015] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:25.017 11:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:25.277 Malloc0 00:26:25.277 11:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:25.537 11:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:25.537 11:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:25.797 [2024-07-15 11:37:54.335956] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:25.797 11:37:54 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:25.797 [2024-07-15 11:37:54.488304] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:26.057 11:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3679159 00:26:26.057 11:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:26.057 11:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:26.057 11:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3679159 /var/tmp/bdevperf.sock 00:26:26.057 11:37:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3679159 ']' 00:26:26.057 11:37:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:26.057 11:37:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:26.057 11:37:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:26.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:26.057 11:37:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:26.057 11:37:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:26.996 11:37:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:26.996 11:37:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:26.996 11:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:26.996 11:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:26:27.256 Nvme0n1 00:26:27.256 11:37:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:27.517 Nvme0n1 00:26:27.778 11:37:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:27.778 11:37:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:29.688 11:37:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:29.688 11:37:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:29.949 11:37:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:29.949 11:37:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:30.889 11:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:30.889 11:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:30.889 11:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.889 11:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:31.149 11:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.149 11:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:31.149 11:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.149 11:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:31.409 11:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:31.409 11:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:31.409 11:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.409 11:37:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:31.409 11:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.409 11:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:31.409 11:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.409 11:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:31.670 11:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.670 11:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:31.670 11:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.670 11:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:31.932 11:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.932 11:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:31.932 11:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.932 11:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:31.932 11:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.932 11:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:31.932 11:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:32.192 11:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:32.455 11:38:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:33.394 11:38:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:33.395 11:38:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:33.395 11:38:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:33.395 11:38:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.654 11:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:33.654 11:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:33.654 11:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.654 11:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:33.654 11:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.654 11:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:33.654 11:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.654 11:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:33.914 11:38:02 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.914 11:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:33.914 11:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.914 11:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:34.175 11:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.175 11:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:34.175 11:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.175 11:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:34.175 11:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.175 11:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:34.175 11:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.175 11:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:34.435 11:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.435 11:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:34.435 11:38:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:34.694 11:38:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:34.694 11:38:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:35.682 11:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:35.682 11:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:35.682 11:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.683 11:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:35.942 11:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.942 11:38:04 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:35.942 11:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.942 11:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:36.202 11:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:36.202 11:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:36.202 11:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.202 11:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:36.202 11:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.202 11:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:36.202 11:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:36.202 11:38:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.462 11:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.462 11:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:36.462 11:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.462 11:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:36.723 11:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.723 11:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:36.723 11:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.723 11:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:36.723 11:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.723 11:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:36.723 11:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:36.983 11:38:05 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:37.243 11:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:38.232 11:38:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:38.232 11:38:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:38.232 11:38:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.232 11:38:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:38.232 11:38:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.232 11:38:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:38.232 11:38:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.232 11:38:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:38.492 11:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:38.492 11:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:38.492 11:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.492 11:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:38.752 11:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.752 11:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:38.752 11:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.752 11:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:38.752 11:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.752 11:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:38.752 11:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.752 11:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:39.012 11:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:26:39.012 11:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:39.012 11:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.012 11:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:39.275 11:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:39.275 11:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:39.275 11:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:39.275 11:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:39.536 11:38:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:40.477 11:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:40.477 11:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:40.477 11:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.477 11:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:40.737 11:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:40.737 11:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:40.737 11:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.737 11:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:40.737 11:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:40.737 11:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:40.737 11:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.737 11:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:40.998 11:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.998 11:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:26:40.998 11:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.998 11:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:41.257 11:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.257 11:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:41.257 11:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.257 11:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:41.257 11:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:41.257 11:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:41.257 11:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.257 11:38:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:41.517 11:38:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:41.517 11:38:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:41.517 11:38:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:41.777 11:38:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:41.777 11:38:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:43.159 11:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:43.159 11:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:43.159 11:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.159 11:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:43.159 11:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:43.159 11:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:43.159 11:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.159 11:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:43.159 11:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.159 11:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:43.159 11:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:43.159 11:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.420 11:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.420 11:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:43.420 11:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.421 11:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:43.421 11:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.421 11:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:43.681 11:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.681 11:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:43.681 11:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:43.681 11:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:43.681 11:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.681 11:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:43.941 11:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.941 11:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:43.941 11:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:43.941 11:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:26:44.201 11:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:44.461 11:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:45.407 11:38:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:45.407 11:38:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:45.407 11:38:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.407 11:38:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:45.668 11:38:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.668 11:38:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:45.668 11:38:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.668 11:38:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:45.668 11:38:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.668 11:38:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:45.668 11:38:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.668 11:38:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:45.928 11:38:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.928 11:38:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:45.928 11:38:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.928 11:38:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:46.188 11:38:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:46.188 11:38:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:46.188 11:38:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.188 11:38:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:46.188 11:38:14 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:46.188 11:38:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:46.188 11:38:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.188 11:38:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:46.450 11:38:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:46.450 11:38:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:46.450 11:38:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:46.710 11:38:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:46.710 11:38:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:47.650 11:38:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:47.650 11:38:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:47.650 11:38:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.650 11:38:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:47.911 11:38:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:47.911 11:38:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:47.911 11:38:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.911 11:38:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:48.171 11:38:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.171 11:38:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:48.171 11:38:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.171 11:38:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:48.171 11:38:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.171 11:38:16 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:48.171 11:38:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.171 11:38:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:48.431 11:38:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.431 11:38:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:48.431 11:38:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.431 11:38:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:48.692 11:38:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.692 11:38:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:48.692 11:38:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.692 11:38:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:48.692 11:38:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.692 11:38:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:48.692 11:38:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:48.952 11:38:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:49.212 11:38:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:50.153 11:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:50.153 11:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:50.153 11:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.153 11:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:50.413 11:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.413 11:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:50.413 11:38:18 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.413 11:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:50.413 11:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.413 11:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:50.413 11:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.413 11:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:50.673 11:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.673 11:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:50.674 11:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.674 11:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:50.674 11:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.674 11:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:50.674 11:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.674 11:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:50.934 11:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.934 11:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:50.934 11:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.934 11:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:51.194 11:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.194 11:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:51.194 11:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:51.194 11:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:51.455 11:38:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:52.425 11:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:52.425 11:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:52.425 11:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.425 11:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:52.686 11:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:52.686 11:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:52.686 11:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.686 11:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:52.686 11:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:52.686 11:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:52.686 11:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.686 11:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:52.947 11:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:52.947 11:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:52.947 11:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.947 11:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:53.207 11:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.208 11:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:53.208 11:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.208 11:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:53.208 11:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.208 11:38:21 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:53.208 11:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.208 11:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:53.468 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:53.468 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3679159 00:26:53.468 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3679159 ']' 00:26:53.468 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3679159 00:26:53.468 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:53.468 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:53.468 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3679159 00:26:53.468 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:53.468 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:53.468 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3679159' 00:26:53.468 killing process with pid 3679159 00:26:53.468 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3679159 00:26:53.468 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3679159 00:26:53.468 Connection closed with partial response: 00:26:53.468 00:26:53.468 00:26:53.764 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3679159 00:26:53.764 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:53.765 [2024-07-15 11:37:54.551197] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:26:53.765 [2024-07-15 11:37:54.551255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3679159 ] 00:26:53.765 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.765 [2024-07-15 11:37:54.600920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.765 [2024-07-15 11:37:54.652704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:53.765 Running I/O for 90 seconds... 
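For reference, the ANA exercise traced above reduces to two small helpers from multipath_status.sh: set_ANA_state flips the ANA state of the 4420/4421 listeners through rpc.py, and port_status queries bdevperf's RPC socket with bdev_nvme_get_io_paths and filters the JSON with jq. The sketch below is reconstructed from the trace rather than copied from the script, so treat the function bodies as approximations; the NQN, target address, ports and socket path are the ones used in this run.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

set_ANA_state() {   # e.g. set_ANA_state non_optimized inaccessible
    # First argument is the ANA state for the 4420 listener, second for 4421.
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

port_status() {     # e.g. port_status 4421 accessible false
    # Pull the multipath I/O paths from bdevperf and compare one attribute
    # (current/connected/accessible) of the listener on the given port.
    local port=$1 attr=$2 expected=$3 actual
    actual=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ "$actual" == "$expected" ]]
}

# One cycle as seen above: make 4420 inaccessible, leave 4421 optimized, give
# the initiator a second to react, then verify where I/O is allowed to flow.
set_ANA_state inaccessible optimized
sleep 1
port_status 4420 current false
port_status 4421 current true

# Later in the run the policy is switched so both optimized paths carry I/O:
# $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active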
00:26:53.765 [2024-07-15 11:38:07.885670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.885703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.885721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.885727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.885738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.885744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.885754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.885759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.885769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.885774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.885784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.885789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.885799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.885804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.885814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.885819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.885830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.885835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.885967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.765 [2024-07-15 11:38:07.885974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:16 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.885985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.765 [2024-07-15 11:38:07.885995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.886006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.765 [2024-07-15 11:38:07.886011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.886021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.765 [2024-07-15 11:38:07.886027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.886037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.765 [2024-07-15 11:38:07.886042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.886053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.765 [2024-07-15 11:38:07.886058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.886068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.765 [2024-07-15 11:38:07.886073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.886084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.765 [2024-07-15 11:38:07.886089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.886100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.886105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.886115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.886121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.886135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.886140] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.886150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.886155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.886165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.886170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.886180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.886185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.886197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.886202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.886681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.886689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.886700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.765 [2024-07-15 11:38:07.886705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.886715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.765 [2024-07-15 11:38:07.886720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.886730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.765 [2024-07-15 11:38:07.886735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.886746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.765 [2024-07-15 11:38:07.886751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.886761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
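Each nvme_io_qpair_print_command record in this dump is followed by a spdk_nvme_print_completion record carrying the same cid, so a single I/O can be followed end to end by filtering on its command id. A small, hedged example against the file dumped above (cid:41 is just one of the ids visible in this excerpt; the count and ids differ per run):

# Show one command's submission and the ASYMMETRIC ACCESS INACCESSIBLE
# completion that pushed it over to the other path.
grep -E 'print_command|print_completion' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt | grep 'cid:41 '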
00:26:53.765 [2024-07-15 11:38:07.886766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.886776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.765 [2024-07-15 11:38:07.886781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.886791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.765 [2024-07-15 11:38:07.886797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.886807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.886812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.886822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.886827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.886837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.886841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.886854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.886859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.886870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.886874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.886885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.886890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.886900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.886906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.887528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 
lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.887536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.887547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.887552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.887562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.887568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.887578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.887584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.887594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.887599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.887609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.887614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.887624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.887629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.887639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.887644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.887655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.887662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.887672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.887677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.887687] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.887692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.887702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.887707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.887717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.887722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.887733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.887738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.887748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.887753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.887763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.887768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.887778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.887783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.887930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.887937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.887948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.765 [2024-07-15 11:38:07.887953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:53.765 [2024-07-15 11:38:07.887963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.887969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 
00:26:53.766 [2024-07-15 11:38:07.887979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.887985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.887995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.766 [2024-07-15 11:38:07.888265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.766 [2024-07-15 11:38:07.888280] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.766 [2024-07-15 11:38:07.888295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.766 [2024-07-15 11:38:07.888310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:75936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.766 [2024-07-15 11:38:07.888325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.766 [2024-07-15 11:38:07.888340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.766 [2024-07-15 11:38:07.888355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.766 [2024-07-15 11:38:07.888371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:53.766 [2024-07-15 11:38:07.888674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.766 [2024-07-15 11:38:07.888797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.766 [2024-07-15 11:38:07.888812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 
lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.766 [2024-07-15 11:38:07.888828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.766 [2024-07-15 11:38:07.888842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:76000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.766 [2024-07-15 11:38:07.888857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:76008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.766 [2024-07-15 11:38:07.888873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:76016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.766 [2024-07-15 11:38:07.888888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.766 [2024-07-15 11:38:07.888903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.888991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.888996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.889006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.889011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.889487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.889494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.889505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.889510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.889520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.766 [2024-07-15 11:38:07.889525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:53.766 [2024-07-15 11:38:07.889535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.889540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.889550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.889555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.889565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.889570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.889581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.889586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 
00:26:53.767 [2024-07-15 11:38:07.889596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.889601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.889611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.889616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.889626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.889632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.889643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-07-15 11:38:07.889648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.889658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-07-15 11:38:07.889663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.889673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-07-15 11:38:07.889678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.889688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-07-15 11:38:07.889693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.889703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-07-15 11:38:07.889708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.889718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-07-15 11:38:07.889723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.889734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-07-15 11:38:07.889739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.889876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-07-15 11:38:07.889883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.889894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.889900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.889910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.889915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.889925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.889929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.889939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.889944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.889956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.889962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.889972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.889978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.889988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.889993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.890008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.890022] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.890038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.890053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.890068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.890082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.890097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.890237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.890254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.890273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.890288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 
[2024-07-15 11:38:07.890303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.890318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.890333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.890348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.890582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.890597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-07-15 11:38:07.890612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-07-15 11:38:07.890627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-07-15 11:38:07.890643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-07-15 11:38:07.890658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75880 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-07-15 11:38:07.890676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-07-15 11:38:07.890691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.767 [2024-07-15 11:38:07.890706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.890721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.890736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.890751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.890766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.890781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:53.767 [2024-07-15 11:38:07.890792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.767 [2024-07-15 11:38:07.890797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892294] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 
11:38:07.892447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 
cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.892992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.892997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.902321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.902343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.902354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-07-15 11:38:07.902360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.902371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-07-15 11:38:07.902380] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.902390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-07-15 11:38:07.902395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.902405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-07-15 11:38:07.902410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.902420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-07-15 11:38:07.902425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.902435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-07-15 11:38:07.902440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.902450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-07-15 11:38:07.902455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.902465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.768 [2024-07-15 11:38:07.902470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.902480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.902485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.902496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.902500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.902510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.902515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.902526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:53.768 [2024-07-15 11:38:07.902531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.902541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.902545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.902555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.902561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.902572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.902577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.902587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.902592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:53.768 [2024-07-15 11:38:07.902602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.768 [2024-07-15 11:38:07.902607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.902971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.902981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.902993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.902999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-07-15 11:38:07.903014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-07-15 11:38:07.903029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 
nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-07-15 11:38:07.903045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-07-15 11:38:07.903059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:76000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-07-15 11:38:07.903075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:76008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-07-15 11:38:07.903090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:76016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-07-15 11:38:07.903105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-07-15 11:38:07.903130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903200] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 
00:26:53.769 [2024-07-15 11:38:07.903351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-07-15 11:38:07.903401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-07-15 11:38:07.903416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-07-15 11:38:07.903431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-07-15 11:38:07.903447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-07-15 11:38:07.903461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-07-15 11:38:07.903476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-07-15 11:38:07.903492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.769 [2024-07-15 11:38:07.903507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903643] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:53.769 [2024-07-15 11:38:07.903794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:53.769 [2024-07-15 11:38:07.903834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.769 [2024-07-15 11:38:07.903839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.903849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.903854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.903865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.903870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.903880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-07-15 11:38:07.903885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.903895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-07-15 11:38:07.903900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.903910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-07-15 11:38:07.903915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.903925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-07-15 11:38:07.903930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.903940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 
lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-07-15 11:38:07.903945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.903955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-07-15 11:38:07.903960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.903970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-07-15 11:38:07.903975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.903985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.903989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.903999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.904004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.904014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.904019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.904029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.904034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.904045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.904050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.904060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.904065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.904076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.904081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.904091] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.904096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.904106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.904111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.904124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.904129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.904139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.904145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.904155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.904160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.904169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.904174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.904185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.904190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.904200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.904205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.904215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.904220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.904230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.904236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0 
00:26:53.770 [2024-07-15 11:38:07.904246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.904251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.904261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.904266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.904276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.904281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.904291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.904296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.904306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.904311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.904994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.905004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.905015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.905021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.905031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.905036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.905046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.905051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.905062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.905067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.905077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.905081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.905092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.905099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.905109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.905114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.905129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.905135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.905145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.905150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.905160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.905165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.905175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.905180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.905190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.905195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.905205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.905210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.905220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.905225] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.905235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.905241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.905251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.905255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.905266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.905271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.905281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.905286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.905300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.905305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.905315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.905320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.905331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.905335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.905346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.770 [2024-07-15 11:38:07.905351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.905361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.770 [2024-07-15 11:38:07.905366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:53.770 [2024-07-15 11:38:07.905376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.771 [2024-07-15 
11:38:07.905381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.905392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.771 [2024-07-15 11:38:07.905397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.905407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.771 [2024-07-15 11:38:07.905411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.905422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:75936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.771 [2024-07-15 11:38:07.905427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.905437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.771 [2024-07-15 11:38:07.905442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.905452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.771 [2024-07-15 11:38:07.905457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.905467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.771 [2024-07-15 11:38:07.905472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.905484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.771 [2024-07-15 11:38:07.905489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.905499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.771 [2024-07-15 11:38:07.905504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.905514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.771 [2024-07-15 11:38:07.905519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.905529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76552 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:53.771 [2024-07-15 11:38:07.905534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.905544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.771 [2024-07-15 11:38:07.905549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.905564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.771 [2024-07-15 11:38:07.905569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.905580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.771 [2024-07-15 11:38:07.905584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.905595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.771 [2024-07-15 11:38:07.905600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.905906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.771 [2024-07-15 11:38:07.905913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.905924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.771 [2024-07-15 11:38:07.905929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.905939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.771 [2024-07-15 11:38:07.905944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.905955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.771 [2024-07-15 11:38:07.905960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.905970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.771 [2024-07-15 11:38:07.905977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.905987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:107 nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.771 [2024-07-15 11:38:07.905993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.912441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.771 [2024-07-15 11:38:07.912461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.912473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:76000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.771 [2024-07-15 11:38:07.912479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.912490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:76008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.771 [2024-07-15 11:38:07.912495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.912505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:76016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.771 [2024-07-15 11:38:07.912511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.912521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.771 [2024-07-15 11:38:07.912526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.912536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.771 [2024-07-15 11:38:07.912541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.912551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.771 [2024-07-15 11:38:07.912556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.912567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.771 [2024-07-15 11:38:07.912572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.912582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.771 [2024-07-15 11:38:07.912587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 
11:38:07.912597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.771 [2024-07-15 11:38:07.912603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.912613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.771 [2024-07-15 11:38:07.912621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.912631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.771 [2024-07-15 11:38:07.912636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.912647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.771 [2024-07-15 11:38:07.912652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.912662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.771 [2024-07-15 11:38:07.912667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.912677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.771 [2024-07-15 11:38:07.912682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.912692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.771 [2024-07-15 11:38:07.912698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.912708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.771 [2024-07-15 11:38:07.912713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.912950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.771 [2024-07-15 11:38:07.912960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.912971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.771 [2024-07-15 11:38:07.912976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 
cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.912987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.771 [2024-07-15 11:38:07.912992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.913003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.771 [2024-07-15 11:38:07.913007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.913017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.771 [2024-07-15 11:38:07.913022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.913033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.771 [2024-07-15 11:38:07.913038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.913052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.771 [2024-07-15 11:38:07.913058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.913068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.771 [2024-07-15 11:38:07.913073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.913083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.771 [2024-07-15 11:38:07.913088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.913098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.771 [2024-07-15 11:38:07.913103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.913113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.771 [2024-07-15 11:38:07.913118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.913134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.771 [2024-07-15 11:38:07.913140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.913150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.771 [2024-07-15 11:38:07.913155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:53.771 [2024-07-15 11:38:07.913165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913295] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.772 [2024-07-15 11:38:07.913540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.772 [2024-07-15 11:38:07.913555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.772 [2024-07-15 11:38:07.913570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.772 [2024-07-15 11:38:07.913586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:87 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.772 [2024-07-15 11:38:07.913601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.772 [2024-07-15 11:38:07.913617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.772 [2024-07-15 11:38:07.913632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913748] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006b p:0 m:0 dnr:0 
00:26:53.772 [2024-07-15 11:38:07.913899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:53.772 [2024-07-15 11:38:07.913973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.772 [2024-07-15 11:38:07.913978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.913989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.913994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.914009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.914024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.914039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:77 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.914054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.914069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.914084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.914099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.914114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.914132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.914146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.914161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.914177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.914193] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.914208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.914223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.914238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.914253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.914268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.914282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.914298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.914313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.773 [2024-07-15 11:38:07.914328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:53.773 [2024-07-15 11:38:07.914343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.773 [2024-07-15 11:38:07.914359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.773 [2024-07-15 11:38:07.914374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.773 [2024-07-15 11:38:07.914389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.773 [2024-07-15 11:38:07.914404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.773 [2024-07-15 11:38:07.914419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.773 [2024-07-15 11:38:07.914434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.914449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.914464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.914479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 
nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.914494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.914509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.914524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.914534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.914539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.915322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.915333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.915345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.915350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.915360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.915365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.915376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.915381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.915391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.773 [2024-07-15 11:38:07.915396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.915406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.773 [2024-07-15 11:38:07.915411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.915421] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.773 [2024-07-15 11:38:07.915426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.915436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.773 [2024-07-15 11:38:07.915441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.915451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:76000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.773 [2024-07-15 11:38:07.915456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.915466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.773 [2024-07-15 11:38:07.915471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.915481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:76016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.773 [2024-07-15 11:38:07.915486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.915496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.773 [2024-07-15 11:38:07.915501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.915514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.915518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.915528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.915534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.915543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.915548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.915558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.915563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0026 p:0 m:0 
dnr:0 00:26:53.773 [2024-07-15 11:38:07.915573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.915578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.915588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.915593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.915603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.915608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.915618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.915623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.915633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.915638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:53.773 [2024-07-15 11:38:07.915648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.773 [2024-07-15 11:38:07.915653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.915663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.915668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.915847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.915854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.915865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.915872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.915882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.915887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.915897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.915902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.915912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.915917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.915927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.915932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.915942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.774 [2024-07-15 11:38:07.915947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.915957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.774 [2024-07-15 11:38:07.915962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.915972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.774 [2024-07-15 11:38:07.915977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.915987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.774 [2024-07-15 11:38:07.915992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.774 [2024-07-15 11:38:07.916006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.774 [2024-07-15 11:38:07.916021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.774 [2024-07-15 11:38:07.916036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.774 [2024-07-15 11:38:07.916054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.916069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.916199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.916215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.916229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.916245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.916260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.916275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.916290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:53.774 [2024-07-15 11:38:07.916305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.916392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.916408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.916423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.916440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.916455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.916471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.916485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.916501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.916769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 
lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.916785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.916800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.916815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.916830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.916845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.916860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.774 [2024-07-15 11:38:07.916877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.774 [2024-07-15 11:38:07.916892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.774 [2024-07-15 11:38:07.916907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.774 [2024-07-15 11:38:07.916922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916932] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.774 [2024-07-15 11:38:07.916937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.774 [2024-07-15 11:38:07.916952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.916962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.774 [2024-07-15 11:38:07.921463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.921499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.921506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.921664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.921672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.921684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.921689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.921699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.921705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.921715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.921720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:53.774 [2024-07-15 11:38:07.921730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.774 [2024-07-15 11:38:07.921738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.921749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.921754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:26:53.775 [2024-07-15 11:38:07.921764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.921769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.921779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.921784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.921794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.921799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.921809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.921814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.921824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.921829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.921839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.921844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.921854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.921859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.921869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.921875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.921885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.921890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.921900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.921905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.921914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.921921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.921931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.921936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.921947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.921952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.921962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.921967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.921977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.921982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.921992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.921997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922057] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:53.775 [2024-07-15 11:38:07.922213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.775 [2024-07-15 11:38:07.922351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 
lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.775 [2024-07-15 11:38:07.922366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.775 [2024-07-15 11:38:07.922381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.775 [2024-07-15 11:38:07.922396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.775 [2024-07-15 11:38:07.922411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.775 [2024-07-15 11:38:07.922426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.775 [2024-07-15 11:38:07.922441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.775 [2024-07-15 11:38:07.922456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922512] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.775 [2024-07-15 11:38:07.922625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:53.775 [2024-07-15 11:38:07.922635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.775 [2024-07-15 11:38:07.922640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.922650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.776 [2024-07-15 11:38:07.922655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001c p:0 m:0 dnr:0 
00:26:53.776 [2024-07-15 11:38:07.922665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.776 [2024-07-15 11:38:07.922671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.922681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.776 [2024-07-15 11:38:07.922686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.922696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:76000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.776 [2024-07-15 11:38:07.922701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.922711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:76008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.776 [2024-07-15 11:38:07.922716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.922727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:76016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.776 [2024-07-15 11:38:07.922732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.922742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.776 [2024-07-15 11:38:07.922747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.922757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.922762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.922772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.922777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.922788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.922793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.922803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.922808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.922818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.922823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.922833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.922839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.922849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.922854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.922866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.922871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.922881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.922886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.922896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.922901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.922911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.922916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.922926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.922931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.922941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.922946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.922956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.922961] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.922971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.922975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.922986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.922991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.923001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.923008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.923020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.776 [2024-07-15 11:38:07.923026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.923038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.776 [2024-07-15 11:38:07.923043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.923057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.776 [2024-07-15 11:38:07.923063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.923075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.776 [2024-07-15 11:38:07.923081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.923092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.776 [2024-07-15 11:38:07.923097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.923107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.776 [2024-07-15 11:38:07.923113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.923125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:53.776 [2024-07-15 11:38:07.923131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.923141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.776 [2024-07-15 11:38:07.923146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.923156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.923161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.923171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.923176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.923186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.923191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.923201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.923206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.923218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.923224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.923235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.923240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.923251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.923257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.923267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.923272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.923282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 
lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.923287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.923297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.923302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.923314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.923319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.923330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.923335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.923345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.923350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.923360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.923365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.923375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.923380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.923390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.923395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.924252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.924264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.924276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.924281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.924292] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.924299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.924309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.924314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.924324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.924329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.924339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.924344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.924354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.924359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.924369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.776 [2024-07-15 11:38:07.924374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.924384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.776 [2024-07-15 11:38:07.924389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.924399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.776 [2024-07-15 11:38:07.924404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.924414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.776 [2024-07-15 11:38:07.924419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:53.776 [2024-07-15 11:38:07.924429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.777 [2024-07-15 11:38:07.924434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 
00:26:53.777 [2024-07-15 11:38:07.924444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.777 [2024-07-15 11:38:07.924449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:53.777 [2024-07-15 11:38:07.924459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.777 [2024-07-15 11:38:07.924464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:53.777 [2024-07-15 11:38:07.924474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.777 [2024-07-15 11:38:07.924480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:53.777 [2024-07-15 11:38:07.925747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.777 [2024-07-15 11:38:07.925754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:53.777 [2024-07-15 11:38:07.925765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.777 [2024-07-15 11:38:07.925770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:53.777 [2024-07-15 11:38:07.925780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.777 [2024-07-15 11:38:07.925785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:53.777 [2024-07-15 11:38:07.925795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.777 [2024-07-15 11:38:07.925800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:53.777 [2024-07-15 11:38:07.925810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.777 [2024-07-15 11:38:07.925815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:53.777 [2024-07-15 11:38:07.925825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.777 [2024-07-15 11:38:07.925831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:53.777 [2024-07-15 11:38:07.925841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.777 [2024-07-15 11:38:07.925846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.777 [2024-07-15 11:38:07.925856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.777 [2024-07-15 11:38:07.925861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:53.777 [2024-07-15 11:38:07.925871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.777 [2024-07-15 11:38:07.925876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:53.777 [2024-07-15 11:38:07.925886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.777 [2024-07-15 11:38:07.925891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:53.777 [2024-07-15 11:38:07.925901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.777 [2024-07-15 11:38:07.925906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:53.777 [2024-07-15 11:38:07.925916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.777 [2024-07-15 11:38:07.925921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:53.777 [2024-07-15 11:38:07.925933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.777 [2024-07-15 11:38:07.925938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:53.777 [2024-07-15 11:38:07.925948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.777 [2024-07-15 11:38:07.925953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:53.777 [2024-07-15 11:38:07.925963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.777 [2024-07-15 11:38:07.925968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:53.777 [2024-07-15 11:38:07.925978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.777 [2024-07-15 11:38:07.925983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:53.777 [2024-07-15 11:38:07.925993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.777 [2024-07-15 11:38:07.925998] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:53.777 [2024-07-15 11:38:07.926149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.777 [2024-07-15 11:38:07.926156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:53.777 [2024-07-15 11:38:07.926167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.777 [2024-07-15 11:38:07.926172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:53.777 [2024-07-15 11:38:07.926182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.777 [2024-07-15 11:38:07.926187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:53.777 [2024-07-15 11:38:07.926198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.777 [2024-07-15 11:38:07.926202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:53.777 [2024-07-15 11:38:07.926213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.777 [2024-07-15 11:38:07.926217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:53.777 [2024-07-15 11:38:07.926227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.777 [2024-07-15 11:38:07.926232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:53.777 [2024-07-15 11:38:07.926243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.777 [2024-07-15 11:38:07.926248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:53.777 [2024-07-15 11:38:07.926261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.777 [2024-07-15 11:38:07.926266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:53.777 [2024-07-15 11:38:07.926276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.777 [2024-07-15 11:38:07.926281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:53.777 [2024-07-15 11:38:07.926291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:53.777 [2024-07-15 11:38:07.926296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 
00:26:53.777 [2024-07-15 11:38:07.926306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:53.777 [2024-07-15 11:38:07.926310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 
[... several hundred further nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* entries (00:26:53.777-00:26:53.780) omitted: READ and WRITE commands on qid:1, lba 75784-76800, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...] 
00:26:53.780 [2024-07-15 11:38:07.932421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:53.780 [2024-07-15 11:38:07.932427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 
00:26:53.780 [2024-07-15 11:38:07.932437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.780 [2024-07-15 11:38:07.932443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:53.780 [2024-07-15 11:38:07.932453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.780 [2024-07-15 11:38:07.932458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.932468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.932473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.932483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.932488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.932498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.932503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.932629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.932635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.932646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.932651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.932661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.932666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.932676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.932681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.932691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.932696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.932706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.932711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.932721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.932727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.932737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.932742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.933044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.933050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.933061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.933066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.933076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.933081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.933091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.781 [2024-07-15 11:38:07.933096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.933106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.781 [2024-07-15 11:38:07.933111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.933121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.781 [2024-07-15 11:38:07.933129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.933139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.781 [2024-07-15 11:38:07.933144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.933154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.781 [2024-07-15 11:38:07.933160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.933172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.781 [2024-07-15 11:38:07.933178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.933188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.781 [2024-07-15 11:38:07.933193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.933204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.933209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.933221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.933226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.933236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.933240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.933250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.933255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.933266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.933271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.934561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.934569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.934579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:53.781 [2024-07-15 11:38:07.934585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.934595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.934599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.934610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.934615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.934625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.934629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.934639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.934644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.934654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.934660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.934670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.934675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.934687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.934692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.934702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.934707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.934717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.934722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.934732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 
lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.934737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.934747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.934752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.934762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.934767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.934777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.934782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.934792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.934797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.934807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.934812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.934960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.934967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.934978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.934983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.934993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.934998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.935008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.935014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.935025] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.935029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.935040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.935044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.935054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.935059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.935069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.935074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.935084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.935089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.935099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.935104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.935114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.935119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.935133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.935138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.935149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.935153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.935164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.935168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
00:26:53.781 [2024-07-15 11:38:07.935179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.935183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.935193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.935200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.935211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.935216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.935226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.935231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.935241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.935246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.935256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.935261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.935271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.935276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:53.781 [2024-07-15 11:38:07.935287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.781 [2024-07-15 11:38:07.935291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.935302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.935307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.935317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.782 [2024-07-15 11:38:07.935322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.935332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.782 [2024-07-15 11:38:07.935337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.935347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.782 [2024-07-15 11:38:07.935352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.935362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.782 [2024-07-15 11:38:07.935367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.935377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.782 [2024-07-15 11:38:07.935382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.935393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.782 [2024-07-15 11:38:07.935399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.935409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.782 [2024-07-15 11:38:07.935414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.935424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.782 [2024-07-15 11:38:07.935429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.935439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.935444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.935454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.935459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.935469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.935474] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.935484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.935489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.935499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.935504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.935513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.935519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.935529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.935534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.935544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.935549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.935852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.935859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.935872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.935877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.935887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.935892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.935902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.782 [2024-07-15 11:38:07.935907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.935918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:53.782 [2024-07-15 11:38:07.935922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.935932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.782 [2024-07-15 11:38:07.935937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.935947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.782 [2024-07-15 11:38:07.935952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.935962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:76000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.782 [2024-07-15 11:38:07.935967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.935977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.782 [2024-07-15 11:38:07.935982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.935993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:76016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.782 [2024-07-15 11:38:07.935997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.782 [2024-07-15 11:38:07.936013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.936028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.936043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.936059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 
lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.936074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.936089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.936236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.936252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.936267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.936283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.936297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.936312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.936327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.936342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936352] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.936357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.936374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.936389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.936404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.782 [2024-07-15 11:38:07.936419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.782 [2024-07-15 11:38:07.936434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.782 [2024-07-15 11:38:07.936450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.782 [2024-07-15 11:38:07.936465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.782 [2024-07-15 11:38:07.936480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.782 [2024-07-15 11:38:07.936625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 
00:26:53.782 [2024-07-15 11:38:07.936635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.782 [2024-07-15 11:38:07.936640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.782 [2024-07-15 11:38:07.936655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.936670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.936686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.936702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.936717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.936787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.936803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.782 [2024-07-15 11:38:07.936813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.782 [2024-07-15 11:38:07.936818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:53.783 [2024-07-15 11:38:07.936828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.783 [2024-07-15 11:38:07.936833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
[... repetitive output elided: between 00:26:53.783 and 00:26:53.786 (2024-07-15 11:38:07.936 to 11:38:07.944) nvme_qpair.c keeps printing the same pattern of *NOTICE* pairs: 243:nvme_io_qpair_print_command for queued WRITE and READ commands on sqid:1 (nsid:1, lba roughly 75784 to 76800, len:8), each followed by 474:spdk_nvme_print_completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 ...]
00:26:53.786 [2024-07-15 11:38:07.944607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76544 len:8 SGL
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.944612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.944623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.944628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.944639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.944644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.944656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.944661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.944734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.944740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.944754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.944759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.944772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.944777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.944791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.944795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.944812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.944817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.944830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.786 [2024-07-15 11:38:07.944835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.944848] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.786 [2024-07-15 11:38:07.944853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.944867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.786 [2024-07-15 11:38:07.944872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.944885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.786 [2024-07-15 11:38:07.944890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.944904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:76000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.786 [2024-07-15 11:38:07.944909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.944922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.786 [2024-07-15 11:38:07.944927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.944941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:76016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.786 [2024-07-15 11:38:07.944946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.944959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.786 [2024-07-15 11:38:07.944964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.944978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.944983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.944996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 
11:38:07.945064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 
cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.786 [2024-07-15 11:38:07.945346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.786 [2024-07-15 11:38:07.945365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.786 [2024-07-15 11:38:07.945385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.786 [2024-07-15 11:38:07.945435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.786 [2024-07-15 11:38:07.945457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.786 [2024-07-15 11:38:07.945477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.786 [2024-07-15 11:38:07.945497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.786 [2024-07-15 11:38:07.945518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945704] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.786 [2024-07-15 11:38:07.945893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:53.786 [2024-07-15 11:38:07.945909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.787 [2024-07-15 11:38:07.945914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:07.945932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.787 [2024-07-15 11:38:07.945937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:07.945953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.787 [2024-07-15 11:38:07.945958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:07.945974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:53.787 [2024-07-15 11:38:07.945980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:07.946257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.787 [2024-07-15 11:38:07.946264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:07.946282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.787 [2024-07-15 11:38:07.946287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:07.946303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.787 [2024-07-15 11:38:07.946308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:07.946325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.787 [2024-07-15 11:38:07.946330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:07.946346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.787 [2024-07-15 11:38:07.946351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:07.946368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.787 [2024-07-15 11:38:07.946373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:07.946390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.787 [2024-07-15 11:38:07.946395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:07.946411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.787 [2024-07-15 11:38:07.946416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:07.946433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.787 [2024-07-15 11:38:07.946438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:07.946455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 
nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.787 [2024-07-15 11:38:07.946461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:07.946478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.787 [2024-07-15 11:38:07.946483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:07.946500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.787 [2024-07-15 11:38:07.946505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:07.946521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.787 [2024-07-15 11:38:07.946526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:07.946543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.787 [2024-07-15 11:38:07.946548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:07.946565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.787 [2024-07-15 11:38:07.946570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:07.946615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.787 [2024-07-15 11:38:07.946621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.993101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.787 [2024-07-15 11:38:19.993140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.993172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.787 [2024-07-15 11:38:19.993178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.993188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.787 [2024-07-15 11:38:19.993195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.993205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.787 [2024-07-15 11:38:19.993210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.993221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.787 [2024-07-15 11:38:19.993226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.993236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.787 [2024-07-15 11:38:19.993245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.993256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.787 [2024-07-15 11:38:19.993261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.993271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.787 [2024-07-15 11:38:19.993276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.993286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.787 [2024-07-15 11:38:19.993291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.993391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.787 [2024-07-15 11:38:19.993399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.993410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.787 [2024-07-15 11:38:19.993415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.993425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.787 [2024-07-15 11:38:19.993430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.993440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.787 [2024-07-15 11:38:19.993446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006d p:0 m:0 dnr:0 
00:26:53.787 [2024-07-15 11:38:19.993456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.787 [2024-07-15 11:38:19.993461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.993471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.787 [2024-07-15 11:38:19.993476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.993487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.787 [2024-07-15 11:38:19.993492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.993502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.787 [2024-07-15 11:38:19.993507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.993517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.787 [2024-07-15 11:38:19.993523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.993535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.787 [2024-07-15 11:38:19.993541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.993849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.787 [2024-07-15 11:38:19.993858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.993870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.787 [2024-07-15 11:38:19.993875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.993885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.787 [2024-07-15 11:38:19.993890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.993900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.787 [2024-07-15 11:38:19.993905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.993915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.787 [2024-07-15 11:38:19.993920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.993930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.787 [2024-07-15 11:38:19.993935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.993945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.787 [2024-07-15 11:38:19.993950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.993960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.787 [2024-07-15 11:38:19.993965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.993975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.787 [2024-07-15 11:38:19.993980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.993990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.787 [2024-07-15 11:38:19.993995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.994005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:53.787 [2024-07-15 11:38:19.994010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.994023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.787 [2024-07-15 11:38:19.994028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:53.787 [2024-07-15 11:38:19.994038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:53.787 [2024-07-15 11:38:19.994043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:53.787 Received shutdown signal, test time was about 25.711357 seconds 00:26:53.787 00:26:53.787 Latency(us) 00:26:53.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:53.787 Job: Nvme0n1 (Core Mask 0x4, 
workload: verify, depth: 128, IO size: 4096) 00:26:53.787 Verification LBA range: start 0x0 length 0x4000 00:26:53.787 Nvme0n1 : 25.71 11112.82 43.41 0.00 0.00 11500.52 324.27 3075822.93 00:26:53.787 =================================================================================================================== 00:26:53.787 Total : 11112.82 43.41 0.00 0.00 11500.52 324.27 3075822.93 00:26:53.787 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:53.787 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:53.787 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:53.787 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:53.787 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:53.787 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:26:53.787 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:53.787 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:26:53.787 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:53.787 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:53.787 rmmod nvme_tcp 00:26:53.787 rmmod nvme_fabrics 00:26:53.787 rmmod nvme_keyring 00:26:53.787 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:53.787 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:26:53.787 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:26:53.787 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3678796 ']' 00:26:53.787 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3678796 00:26:53.787 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3678796 ']' 00:26:53.787 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3678796 00:26:53.787 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:53.787 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:53.787 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3678796 00:26:54.048 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:54.048 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:54.048 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3678796' 00:26:54.048 killing process with pid 3678796 00:26:54.048 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3678796 00:26:54.048 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3678796 00:26:54.048 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:54.048 11:38:22 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:54.048 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:54.048 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:54.048 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:54.048 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.048 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:54.048 11:38:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.593 11:38:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:56.593 00:26:56.593 real 0m39.263s 00:26:56.593 user 1m41.447s 00:26:56.593 sys 0m10.727s 00:26:56.593 11:38:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:56.593 11:38:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:56.593 ************************************ 00:26:56.593 END TEST nvmf_host_multipath_status 00:26:56.593 ************************************ 00:26:56.593 11:38:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:56.593 11:38:24 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:56.593 11:38:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:56.593 11:38:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:56.593 11:38:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:56.593 ************************************ 00:26:56.593 START TEST nvmf_discovery_remove_ifc 00:26:56.593 ************************************ 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:56.593 * Looking for test storage... 
00:26:56.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:56.593 11:38:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:03.178 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:03.178 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:03.178 11:38:31 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:03.178 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:03.178 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:03.178 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:03.439 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:03.439 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:03.439 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:03.439 11:38:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:03.439 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:03.439 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:03.439 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:03.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:03.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:27:03.439 00:27:03.439 --- 10.0.0.2 ping statistics --- 00:27:03.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.439 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:27:03.439 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:03.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:03.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.372 ms 00:27:03.439 00:27:03.439 --- 10.0.0.1 ping statistics --- 00:27:03.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.439 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:27:03.439 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:03.439 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:27:03.439 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:03.439 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:03.439 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:03.439 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:03.439 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:03.439 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:03.439 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:03.439 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:03.439 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:03.439 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:03.439 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:03.439 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3689024 00:27:03.439 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3689024 00:27:03.439 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:03.439 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3689024 ']' 00:27:03.439 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.439 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:03.439 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:03.439 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:03.439 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:03.699 [2024-07-15 11:38:32.162901] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:27:03.699 [2024-07-15 11:38:32.162963] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:03.699 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.699 [2024-07-15 11:38:32.250324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.699 [2024-07-15 11:38:32.341928] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:03.699 [2024-07-15 11:38:32.341984] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:03.699 [2024-07-15 11:38:32.341991] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:03.699 [2024-07-15 11:38:32.341998] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:03.699 [2024-07-15 11:38:32.342004] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:03.699 [2024-07-15 11:38:32.342029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:04.269 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:04.269 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:27:04.269 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:04.269 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:04.269 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.530 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:04.530 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:04.530 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.530 11:38:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.530 [2024-07-15 11:38:33.003422] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:04.530 [2024-07-15 11:38:33.011615] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:04.530 null0 00:27:04.530 [2024-07-15 11:38:33.043610] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:04.530 11:38:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.530 11:38:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3689076 00:27:04.530 11:38:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3689076 /tmp/host.sock 00:27:04.530 11:38:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:04.530 11:38:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3689076 ']' 00:27:04.530 11:38:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:27:04.530 11:38:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:27:04.530 11:38:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:04.530 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:04.530 11:38:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:04.530 11:38:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.530 [2024-07-15 11:38:33.129409] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:27:04.530 [2024-07-15 11:38:33.129471] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3689076 ] 00:27:04.530 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.530 [2024-07-15 11:38:33.193774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.813 [2024-07-15 11:38:33.268614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.384 11:38:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:05.384 11:38:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:27:05.384 11:38:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:05.384 11:38:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:05.384 11:38:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.384 11:38:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:05.384 11:38:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.384 11:38:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:05.384 11:38:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.384 11:38:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:05.384 11:38:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.384 11:38:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:05.384 11:38:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.384 11:38:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:06.325 [2024-07-15 11:38:35.012328] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:06.325 [2024-07-15 11:38:35.012350] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:06.325 [2024-07-15 11:38:35.012364] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:06.586 [2024-07-15 11:38:35.141761] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:06.586 [2024-07-15 11:38:35.203557] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:06.586 [2024-07-15 11:38:35.203607] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:06.586 [2024-07-15 11:38:35.203630] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:06.586 [2024-07-15 11:38:35.203644] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:06.586 [2024-07-15 11:38:35.203664] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:06.586 11:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.586 11:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:06.586 11:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:06.586 11:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:06.586 11:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:06.586 11:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.586 11:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:06.586 [2024-07-15 11:38:35.211197] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xd3e7b0 was disconnected and freed. delete nvme_qpair. 00:27:06.586 11:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:06.586 11:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:06.586 11:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.586 11:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:06.586 11:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:06.586 11:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:06.847 11:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:06.847 11:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:06.847 11:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:06.847 11:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:06.847 11:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.847 11:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:06.847 11:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:06.847 11:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:06.847 11:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.847 11:38:35 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:06.847 11:38:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:07.788 11:38:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:07.788 11:38:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:07.788 11:38:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:07.788 11:38:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.788 11:38:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:07.788 11:38:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:07.788 11:38:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:07.788 11:38:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.049 11:38:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:08.049 11:38:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:08.991 11:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:08.991 11:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:08.991 11:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:08.991 11:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.991 11:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:08.991 11:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:08.991 11:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:08.991 11:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.991 11:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:08.992 11:38:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:09.937 11:38:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:09.937 11:38:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:09.937 11:38:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:09.937 11:38:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.937 11:38:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:09.937 11:38:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:09.937 11:38:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:09.937 11:38:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.937 11:38:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:09.937 11:38:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:10.912 11:38:39 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:10.912 11:38:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:10.912 11:38:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:10.912 11:38:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.912 11:38:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:10.912 11:38:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:10.912 11:38:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:11.173 11:38:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.173 11:38:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:11.173 11:38:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:12.115 [2024-07-15 11:38:40.644267] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:12.115 [2024-07-15 11:38:40.644320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.115 [2024-07-15 11:38:40.644332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.115 [2024-07-15 11:38:40.644342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.115 [2024-07-15 11:38:40.644350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.115 [2024-07-15 11:38:40.644358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.115 [2024-07-15 11:38:40.644365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.115 [2024-07-15 11:38:40.644372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.115 [2024-07-15 11:38:40.644379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.115 [2024-07-15 11:38:40.644388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.115 [2024-07-15 11:38:40.644395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.115 [2024-07-15 11:38:40.644402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05040 is same with the state(5) to be set 00:27:12.115 [2024-07-15 11:38:40.654286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05040 (9): Bad file descriptor 00:27:12.115 11:38:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:12.115 [2024-07-15 11:38:40.664328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: 
*NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:12.115 11:38:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:12.115 11:38:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:12.115 11:38:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.115 11:38:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:12.115 11:38:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:12.115 11:38:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:13.055 [2024-07-15 11:38:41.673147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:13.055 [2024-07-15 11:38:41.673186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05040 with addr=10.0.0.2, port=4420 00:27:13.055 [2024-07-15 11:38:41.673198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05040 is same with the state(5) to be set 00:27:13.055 [2024-07-15 11:38:41.673220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05040 (9): Bad file descriptor 00:27:13.055 [2024-07-15 11:38:41.673588] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:13.055 [2024-07-15 11:38:41.673606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:13.055 [2024-07-15 11:38:41.673613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:13.055 [2024-07-15 11:38:41.673621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:13.055 [2024-07-15 11:38:41.673636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:13.055 [2024-07-15 11:38:41.673649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:13.055 11:38:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.055 11:38:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:13.055 11:38:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:13.999 [2024-07-15 11:38:42.676025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:13.999 [2024-07-15 11:38:42.676044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:13.999 [2024-07-15 11:38:42.676052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:13.999 [2024-07-15 11:38:42.676059] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:27:13.999 [2024-07-15 11:38:42.676071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.999 [2024-07-15 11:38:42.676090] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:13.999 [2024-07-15 11:38:42.676111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:13.999 [2024-07-15 11:38:42.676120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.999 [2024-07-15 11:38:42.676133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:13.999 [2024-07-15 11:38:42.676140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.999 [2024-07-15 11:38:42.676148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:13.999 [2024-07-15 11:38:42.676155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.999 [2024-07-15 11:38:42.676163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:13.999 [2024-07-15 11:38:42.676170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.999 [2024-07-15 11:38:42.676178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:13.999 [2024-07-15 11:38:42.676185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.999 [2024-07-15 11:38:42.676192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
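The cascade of ABORTED / failed-state messages above and below is the expected fallout of taking cvl_0_0 away from a connected controller; how often the host retries and how soon it gives up were fixed when discovery was started. A minimal sketch of that call, assuming the rpc.py client from the SPDK tree checked out above and the /tmp/host.sock RPC socket used by this run (the log issues the same options through the rpc_cmd wrapper):

# Roughly: retry the TCP connection once per second, fail outstanding I/O after ~1s,
# and drop the controller (deleting its bdev) ~2s after the path disappears.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 \
    --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 \
    --wait-for-attach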
00:27:13.999 [2024-07-15 11:38:42.676520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd044c0 (9): Bad file descriptor 00:27:13.999 [2024-07-15 11:38:42.677532] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:13.999 [2024-07-15 11:38:42.677543] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:27:14.259 11:38:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:14.259 11:38:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:14.259 11:38:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:14.259 11:38:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.259 11:38:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:14.259 11:38:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:14.259 11:38:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:14.259 11:38:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.260 11:38:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:14.260 11:38:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:14.260 11:38:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:14.260 11:38:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:14.260 11:38:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:14.260 11:38:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:14.260 11:38:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:14.260 11:38:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.260 11:38:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:14.260 11:38:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:14.260 11:38:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:14.260 11:38:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.260 11:38:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:14.260 11:38:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:15.644 11:38:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:15.644 11:38:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:15.644 11:38:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:15.644 11:38:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.644 11:38:43 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:27:15.644 11:38:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:15.644 11:38:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:15.644 11:38:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.644 11:38:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:15.644 11:38:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:16.218 [2024-07-15 11:38:44.738335] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:16.218 [2024-07-15 11:38:44.738356] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:16.218 [2024-07-15 11:38:44.738370] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:16.218 [2024-07-15 11:38:44.866812] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:16.479 [2024-07-15 11:38:44.926928] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:16.479 [2024-07-15 11:38:44.926969] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:16.479 [2024-07-15 11:38:44.926989] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:16.479 [2024-07-15 11:38:44.927003] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:16.479 [2024-07-15 11:38:44.927012] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:16.479 [2024-07-15 11:38:44.935435] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xd1b310 was disconnected and freed. delete nvme_qpair. 
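With the address back on cvl_0_0 and the link up, discovery re-attaches and the namespace reappears as nvme1n1. The wait_for_bdev / get_bdev_list polling the script uses to detect both the removal and the re-add boils down to roughly the loop below; a sketch only, assuming rpc.py, jq and the same /tmp/host.sock socket seen in the trace:

# Poll the host app's bdev list once a second until it matches what we expect
# ('' right after the interface is torn down, nvme1n1 once rediscovery completes).
wait_for_bdev() {
    local expected=$1 current
    while :; do
        current=$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
        [[ $current == "$expected" ]] && return 0
        sleep 1
    done
}

wait_for_bdev ""        # bdev list drains after cvl_0_0 is de-addressed and set down
wait_for_bdev nvme1n1   # and refills once the interface is re-addressed and up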
00:27:16.479 11:38:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:16.479 11:38:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:16.479 11:38:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:16.479 11:38:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.479 11:38:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:16.479 11:38:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:16.479 11:38:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:16.479 11:38:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.479 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:16.479 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:16.479 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3689076 00:27:16.479 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3689076 ']' 00:27:16.479 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3689076 00:27:16.479 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:27:16.479 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:16.479 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3689076 00:27:16.479 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:16.479 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:16.479 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3689076' 00:27:16.479 killing process with pid 3689076 00:27:16.479 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3689076 00:27:16.479 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3689076 00:27:16.740 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:16.740 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:16.740 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:27:16.740 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:16.740 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:27:16.740 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:16.740 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:16.740 rmmod nvme_tcp 00:27:16.740 rmmod nvme_fabrics 00:27:16.740 rmmod nvme_keyring 00:27:16.740 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:16.740 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:27:16.740 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
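With the host app shut down and the kernel initiator modules unloaded, the remaining cleanup below (nvmftestfini) stops the target app and unwinds the namespace plumbing created at the start. Roughly, and under the assumption that remove_spdk_ns amounts to deleting the test namespace:

# Stop the target that was running inside the namespace, then remove the namespace
# and the 10.0.0.1/24 address left on the initiator-side interface.
kill "$nvmfpid"                    # nvmf_tgt started earlier as pid 3689024
ip netns delete cvl_0_0_ns_spdk    # assumption: what remove_spdk_ns does in this run
ip -4 addr flush cvl_0_1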
00:27:16.740 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3689024 ']' 00:27:16.740 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3689024 00:27:16.740 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3689024 ']' 00:27:16.740 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3689024 00:27:16.740 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:27:16.740 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:16.740 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3689024 00:27:16.740 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:16.740 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:16.740 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3689024' 00:27:16.740 killing process with pid 3689024 00:27:16.740 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3689024 00:27:16.741 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3689024 00:27:17.002 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:17.002 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:17.002 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:17.002 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:17.002 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:17.002 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.002 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:17.002 11:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.916 11:38:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:18.916 00:27:18.916 real 0m22.746s 00:27:18.916 user 0m26.917s 00:27:18.916 sys 0m6.666s 00:27:18.916 11:38:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:18.916 11:38:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:18.916 ************************************ 00:27:18.916 END TEST nvmf_discovery_remove_ifc 00:27:18.916 ************************************ 00:27:18.916 11:38:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:18.916 11:38:47 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:18.916 11:38:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:18.916 11:38:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:18.916 11:38:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:18.916 ************************************ 00:27:18.916 START TEST nvmf_identify_kernel_target 00:27:18.916 ************************************ 
00:27:18.916 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:19.177 * Looking for test storage... 00:27:19.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:19.177 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:19.177 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:19.177 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:19.177 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:19.177 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:19.177 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:19.177 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:19.177 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:19.177 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:19.177 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:19.177 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:19.177 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:19.177 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:19.177 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:19.177 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:19.177 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:19.177 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:19.178 11:38:47 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:27:19.178 11:38:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:25.770 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:25.770 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:25.770 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:25.770 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:25.770 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:25.770 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:25.770 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:25.770 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:25.770 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:25.770 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:25.771 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:25.771 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:25.771 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:25.771 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:25.771 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:26.033 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:26.033 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:26.033 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:26.033 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:26.033 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:26.033 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:26.294 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:26.294 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:26.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:26.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:27:26.294 00:27:26.294 --- 10.0.0.2 ping statistics --- 00:27:26.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.294 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:27:26.294 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:26.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
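At this point nvmftestinit has wired the two E810 ports into a small point-to-point NVMe/TCP topology: cvl_0_0 is moved into a private network namespace with 10.0.0.2/24 (the target side), cvl_0_1 keeps 10.0.0.1/24 in the host namespace (the initiator side), TCP port 4420 is opened on the host-side interface, and both directions are checked with ping. A condensed sketch of what nvmf_tcp_init does here, with the interface, namespace and address values taken from this run (not a verbatim excerpt of nvmf/common.sh; another host would pick different NICs):

  # flush any stale addresses on the chosen E810 ports
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  # give the target interface its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator address in the host namespace, target address inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP (port 4420) in on the host-side interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # sanity-check both directions
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1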
00:27:26.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.369 ms 00:27:26.294 00:27:26.294 --- 10.0.0.1 ping statistics --- 00:27:26.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.294 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:27:26.294 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:26.294 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:27:26.294 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:26.294 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:26.294 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:26.294 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:26.294 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:26.294 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:26.294 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:26.294 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:26.294 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:26.294 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:27:26.294 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.294 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.294 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.295 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.295 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.295 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.295 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.295 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.295 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.295 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:26.295 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:26.295 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:26.295 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:26.295 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:26.295 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:26.295 11:38:54 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:26.295 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:27:26.295 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:26.295 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:26.295 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:26.295 11:38:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:29.596 Waiting for block devices as requested 00:27:29.596 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:29.596 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:29.596 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:29.596 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:29.596 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:29.596 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:29.596 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:29.596 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:29.856 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:29.856 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:29.856 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:30.117 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:30.117 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:30.117 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:30.117 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:30.377 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:30.377 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:30.638 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:30.638 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:30.638 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:30.638 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:30.638 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:30.638 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:30.638 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:30.638 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:30.638 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:30.638 No valid GPT data, bailing 00:27:30.638 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:30.638 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:27:30.638 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:27:30.638 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:30.638 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:30.638 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:30.638 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:30.638 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:30.638 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:30.638 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:27:30.638 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:30.638 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:27:30.638 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:30.638 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:27:30.638 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:27:30.638 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:27:30.638 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:30.638 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:30.900 00:27:30.900 Discovery Log Number of Records 2, Generation counter 2 00:27:30.900 =====Discovery Log Entry 0====== 00:27:30.900 trtype: tcp 00:27:30.900 adrfam: ipv4 00:27:30.900 subtype: current discovery subsystem 00:27:30.900 treq: not specified, sq flow control disable supported 00:27:30.900 portid: 1 00:27:30.900 trsvcid: 4420 00:27:30.900 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:30.901 traddr: 10.0.0.1 00:27:30.901 eflags: none 00:27:30.901 sectype: none 00:27:30.901 =====Discovery Log Entry 1====== 00:27:30.901 trtype: tcp 00:27:30.901 adrfam: ipv4 00:27:30.901 subtype: nvme subsystem 00:27:30.901 treq: not specified, sq flow control disable supported 00:27:30.901 portid: 1 00:27:30.901 trsvcid: 4420 00:27:30.901 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:30.901 traddr: 10.0.0.1 00:27:30.901 eflags: none 00:27:30.901 sectype: none 00:27:30.901 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:30.901 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:30.901 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.901 ===================================================== 00:27:30.901 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:30.901 ===================================================== 00:27:30.901 Controller Capabilities/Features 00:27:30.901 ================================ 00:27:30.901 Vendor ID: 0000 00:27:30.901 Subsystem Vendor ID: 0000 00:27:30.901 Serial Number: 13754e59604c6b2033e9 00:27:30.901 Model Number: Linux 00:27:30.901 Firmware Version: 6.7.0-68 00:27:30.901 Recommended Arb Burst: 0 00:27:30.901 IEEE OUI Identifier: 00 00 00 00:27:30.901 Multi-path I/O 00:27:30.901 May have multiple subsystem ports: No 00:27:30.901 May have multiple 
controllers: No 00:27:30.901 Associated with SR-IOV VF: No 00:27:30.901 Max Data Transfer Size: Unlimited 00:27:30.901 Max Number of Namespaces: 0 00:27:30.901 Max Number of I/O Queues: 1024 00:27:30.901 NVMe Specification Version (VS): 1.3 00:27:30.901 NVMe Specification Version (Identify): 1.3 00:27:30.901 Maximum Queue Entries: 1024 00:27:30.901 Contiguous Queues Required: No 00:27:30.901 Arbitration Mechanisms Supported 00:27:30.901 Weighted Round Robin: Not Supported 00:27:30.901 Vendor Specific: Not Supported 00:27:30.901 Reset Timeout: 7500 ms 00:27:30.901 Doorbell Stride: 4 bytes 00:27:30.901 NVM Subsystem Reset: Not Supported 00:27:30.901 Command Sets Supported 00:27:30.901 NVM Command Set: Supported 00:27:30.901 Boot Partition: Not Supported 00:27:30.901 Memory Page Size Minimum: 4096 bytes 00:27:30.901 Memory Page Size Maximum: 4096 bytes 00:27:30.901 Persistent Memory Region: Not Supported 00:27:30.901 Optional Asynchronous Events Supported 00:27:30.901 Namespace Attribute Notices: Not Supported 00:27:30.901 Firmware Activation Notices: Not Supported 00:27:30.901 ANA Change Notices: Not Supported 00:27:30.901 PLE Aggregate Log Change Notices: Not Supported 00:27:30.901 LBA Status Info Alert Notices: Not Supported 00:27:30.901 EGE Aggregate Log Change Notices: Not Supported 00:27:30.901 Normal NVM Subsystem Shutdown event: Not Supported 00:27:30.901 Zone Descriptor Change Notices: Not Supported 00:27:30.901 Discovery Log Change Notices: Supported 00:27:30.901 Controller Attributes 00:27:30.901 128-bit Host Identifier: Not Supported 00:27:30.901 Non-Operational Permissive Mode: Not Supported 00:27:30.901 NVM Sets: Not Supported 00:27:30.901 Read Recovery Levels: Not Supported 00:27:30.901 Endurance Groups: Not Supported 00:27:30.901 Predictable Latency Mode: Not Supported 00:27:30.901 Traffic Based Keep ALive: Not Supported 00:27:30.901 Namespace Granularity: Not Supported 00:27:30.901 SQ Associations: Not Supported 00:27:30.901 UUID List: Not Supported 00:27:30.901 Multi-Domain Subsystem: Not Supported 00:27:30.901 Fixed Capacity Management: Not Supported 00:27:30.901 Variable Capacity Management: Not Supported 00:27:30.901 Delete Endurance Group: Not Supported 00:27:30.901 Delete NVM Set: Not Supported 00:27:30.901 Extended LBA Formats Supported: Not Supported 00:27:30.901 Flexible Data Placement Supported: Not Supported 00:27:30.901 00:27:30.901 Controller Memory Buffer Support 00:27:30.901 ================================ 00:27:30.901 Supported: No 00:27:30.901 00:27:30.901 Persistent Memory Region Support 00:27:30.901 ================================ 00:27:30.901 Supported: No 00:27:30.901 00:27:30.901 Admin Command Set Attributes 00:27:30.901 ============================ 00:27:30.901 Security Send/Receive: Not Supported 00:27:30.901 Format NVM: Not Supported 00:27:30.901 Firmware Activate/Download: Not Supported 00:27:30.901 Namespace Management: Not Supported 00:27:30.901 Device Self-Test: Not Supported 00:27:30.901 Directives: Not Supported 00:27:30.901 NVMe-MI: Not Supported 00:27:30.901 Virtualization Management: Not Supported 00:27:30.901 Doorbell Buffer Config: Not Supported 00:27:30.901 Get LBA Status Capability: Not Supported 00:27:30.901 Command & Feature Lockdown Capability: Not Supported 00:27:30.901 Abort Command Limit: 1 00:27:30.901 Async Event Request Limit: 1 00:27:30.901 Number of Firmware Slots: N/A 00:27:30.901 Firmware Slot 1 Read-Only: N/A 00:27:30.901 Firmware Activation Without Reset: N/A 00:27:30.901 Multiple Update Detection Support: N/A 
00:27:30.901 Firmware Update Granularity: No Information Provided 00:27:30.901 Per-Namespace SMART Log: No 00:27:30.901 Asymmetric Namespace Access Log Page: Not Supported 00:27:30.901 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:30.901 Command Effects Log Page: Not Supported 00:27:30.901 Get Log Page Extended Data: Supported 00:27:30.901 Telemetry Log Pages: Not Supported 00:27:30.901 Persistent Event Log Pages: Not Supported 00:27:30.901 Supported Log Pages Log Page: May Support 00:27:30.901 Commands Supported & Effects Log Page: Not Supported 00:27:30.901 Feature Identifiers & Effects Log Page:May Support 00:27:30.901 NVMe-MI Commands & Effects Log Page: May Support 00:27:30.901 Data Area 4 for Telemetry Log: Not Supported 00:27:30.901 Error Log Page Entries Supported: 1 00:27:30.901 Keep Alive: Not Supported 00:27:30.901 00:27:30.901 NVM Command Set Attributes 00:27:30.901 ========================== 00:27:30.901 Submission Queue Entry Size 00:27:30.901 Max: 1 00:27:30.901 Min: 1 00:27:30.901 Completion Queue Entry Size 00:27:30.901 Max: 1 00:27:30.901 Min: 1 00:27:30.901 Number of Namespaces: 0 00:27:30.901 Compare Command: Not Supported 00:27:30.901 Write Uncorrectable Command: Not Supported 00:27:30.901 Dataset Management Command: Not Supported 00:27:30.901 Write Zeroes Command: Not Supported 00:27:30.901 Set Features Save Field: Not Supported 00:27:30.901 Reservations: Not Supported 00:27:30.901 Timestamp: Not Supported 00:27:30.901 Copy: Not Supported 00:27:30.901 Volatile Write Cache: Not Present 00:27:30.901 Atomic Write Unit (Normal): 1 00:27:30.901 Atomic Write Unit (PFail): 1 00:27:30.901 Atomic Compare & Write Unit: 1 00:27:30.901 Fused Compare & Write: Not Supported 00:27:30.901 Scatter-Gather List 00:27:30.901 SGL Command Set: Supported 00:27:30.901 SGL Keyed: Not Supported 00:27:30.901 SGL Bit Bucket Descriptor: Not Supported 00:27:30.901 SGL Metadata Pointer: Not Supported 00:27:30.901 Oversized SGL: Not Supported 00:27:30.901 SGL Metadata Address: Not Supported 00:27:30.901 SGL Offset: Supported 00:27:30.901 Transport SGL Data Block: Not Supported 00:27:30.901 Replay Protected Memory Block: Not Supported 00:27:30.901 00:27:30.901 Firmware Slot Information 00:27:30.901 ========================= 00:27:30.901 Active slot: 0 00:27:30.901 00:27:30.901 00:27:30.901 Error Log 00:27:30.901 ========= 00:27:30.901 00:27:30.901 Active Namespaces 00:27:30.901 ================= 00:27:30.901 Discovery Log Page 00:27:30.901 ================== 00:27:30.901 Generation Counter: 2 00:27:30.901 Number of Records: 2 00:27:30.901 Record Format: 0 00:27:30.901 00:27:30.901 Discovery Log Entry 0 00:27:30.901 ---------------------- 00:27:30.901 Transport Type: 3 (TCP) 00:27:30.901 Address Family: 1 (IPv4) 00:27:30.901 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:30.901 Entry Flags: 00:27:30.901 Duplicate Returned Information: 0 00:27:30.901 Explicit Persistent Connection Support for Discovery: 0 00:27:30.901 Transport Requirements: 00:27:30.901 Secure Channel: Not Specified 00:27:30.901 Port ID: 1 (0x0001) 00:27:30.901 Controller ID: 65535 (0xffff) 00:27:30.901 Admin Max SQ Size: 32 00:27:30.901 Transport Service Identifier: 4420 00:27:30.901 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:30.901 Transport Address: 10.0.0.1 00:27:30.901 Discovery Log Entry 1 00:27:30.901 ---------------------- 00:27:30.901 Transport Type: 3 (TCP) 00:27:30.901 Address Family: 1 (IPv4) 00:27:30.901 Subsystem Type: 2 (NVM Subsystem) 00:27:30.901 Entry Flags: 
00:27:30.901 Duplicate Returned Information: 0 00:27:30.901 Explicit Persistent Connection Support for Discovery: 0 00:27:30.901 Transport Requirements: 00:27:30.901 Secure Channel: Not Specified 00:27:30.901 Port ID: 1 (0x0001) 00:27:30.901 Controller ID: 65535 (0xffff) 00:27:30.901 Admin Max SQ Size: 32 00:27:30.901 Transport Service Identifier: 4420 00:27:30.901 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:30.901 Transport Address: 10.0.0.1 00:27:30.901 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:30.902 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.902 get_feature(0x01) failed 00:27:30.902 get_feature(0x02) failed 00:27:30.902 get_feature(0x04) failed 00:27:30.902 ===================================================== 00:27:30.902 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:30.902 ===================================================== 00:27:30.902 Controller Capabilities/Features 00:27:30.902 ================================ 00:27:30.902 Vendor ID: 0000 00:27:30.902 Subsystem Vendor ID: 0000 00:27:30.902 Serial Number: e37443bbe408b6e92fd4 00:27:30.902 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:30.902 Firmware Version: 6.7.0-68 00:27:30.902 Recommended Arb Burst: 6 00:27:30.902 IEEE OUI Identifier: 00 00 00 00:27:30.902 Multi-path I/O 00:27:30.902 May have multiple subsystem ports: Yes 00:27:30.902 May have multiple controllers: Yes 00:27:30.902 Associated with SR-IOV VF: No 00:27:30.902 Max Data Transfer Size: Unlimited 00:27:30.902 Max Number of Namespaces: 1024 00:27:30.902 Max Number of I/O Queues: 128 00:27:30.902 NVMe Specification Version (VS): 1.3 00:27:30.902 NVMe Specification Version (Identify): 1.3 00:27:30.902 Maximum Queue Entries: 1024 00:27:30.902 Contiguous Queues Required: No 00:27:30.902 Arbitration Mechanisms Supported 00:27:30.902 Weighted Round Robin: Not Supported 00:27:30.902 Vendor Specific: Not Supported 00:27:30.902 Reset Timeout: 7500 ms 00:27:30.902 Doorbell Stride: 4 bytes 00:27:30.902 NVM Subsystem Reset: Not Supported 00:27:30.902 Command Sets Supported 00:27:30.902 NVM Command Set: Supported 00:27:30.902 Boot Partition: Not Supported 00:27:30.902 Memory Page Size Minimum: 4096 bytes 00:27:30.902 Memory Page Size Maximum: 4096 bytes 00:27:30.902 Persistent Memory Region: Not Supported 00:27:30.902 Optional Asynchronous Events Supported 00:27:30.902 Namespace Attribute Notices: Supported 00:27:30.902 Firmware Activation Notices: Not Supported 00:27:30.902 ANA Change Notices: Supported 00:27:30.902 PLE Aggregate Log Change Notices: Not Supported 00:27:30.902 LBA Status Info Alert Notices: Not Supported 00:27:30.902 EGE Aggregate Log Change Notices: Not Supported 00:27:30.902 Normal NVM Subsystem Shutdown event: Not Supported 00:27:30.902 Zone Descriptor Change Notices: Not Supported 00:27:30.902 Discovery Log Change Notices: Not Supported 00:27:30.902 Controller Attributes 00:27:30.902 128-bit Host Identifier: Supported 00:27:30.902 Non-Operational Permissive Mode: Not Supported 00:27:30.902 NVM Sets: Not Supported 00:27:30.902 Read Recovery Levels: Not Supported 00:27:30.902 Endurance Groups: Not Supported 00:27:30.902 Predictable Latency Mode: Not Supported 00:27:30.902 Traffic Based Keep ALive: Supported 00:27:30.902 Namespace Granularity: Not Supported 
00:27:30.902 SQ Associations: Not Supported 00:27:30.902 UUID List: Not Supported 00:27:30.902 Multi-Domain Subsystem: Not Supported 00:27:30.902 Fixed Capacity Management: Not Supported 00:27:30.902 Variable Capacity Management: Not Supported 00:27:30.902 Delete Endurance Group: Not Supported 00:27:30.902 Delete NVM Set: Not Supported 00:27:30.902 Extended LBA Formats Supported: Not Supported 00:27:30.902 Flexible Data Placement Supported: Not Supported 00:27:30.902 00:27:30.902 Controller Memory Buffer Support 00:27:30.902 ================================ 00:27:30.902 Supported: No 00:27:30.902 00:27:30.902 Persistent Memory Region Support 00:27:30.902 ================================ 00:27:30.902 Supported: No 00:27:30.902 00:27:30.902 Admin Command Set Attributes 00:27:30.902 ============================ 00:27:30.902 Security Send/Receive: Not Supported 00:27:30.902 Format NVM: Not Supported 00:27:30.902 Firmware Activate/Download: Not Supported 00:27:30.902 Namespace Management: Not Supported 00:27:30.902 Device Self-Test: Not Supported 00:27:30.902 Directives: Not Supported 00:27:30.902 NVMe-MI: Not Supported 00:27:30.902 Virtualization Management: Not Supported 00:27:30.902 Doorbell Buffer Config: Not Supported 00:27:30.902 Get LBA Status Capability: Not Supported 00:27:30.902 Command & Feature Lockdown Capability: Not Supported 00:27:30.902 Abort Command Limit: 4 00:27:30.902 Async Event Request Limit: 4 00:27:30.902 Number of Firmware Slots: N/A 00:27:30.902 Firmware Slot 1 Read-Only: N/A 00:27:30.902 Firmware Activation Without Reset: N/A 00:27:30.902 Multiple Update Detection Support: N/A 00:27:30.902 Firmware Update Granularity: No Information Provided 00:27:30.902 Per-Namespace SMART Log: Yes 00:27:30.902 Asymmetric Namespace Access Log Page: Supported 00:27:30.902 ANA Transition Time : 10 sec 00:27:30.902 00:27:30.902 Asymmetric Namespace Access Capabilities 00:27:30.902 ANA Optimized State : Supported 00:27:30.902 ANA Non-Optimized State : Supported 00:27:30.902 ANA Inaccessible State : Supported 00:27:30.902 ANA Persistent Loss State : Supported 00:27:30.902 ANA Change State : Supported 00:27:30.902 ANAGRPID is not changed : No 00:27:30.902 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:30.902 00:27:30.902 ANA Group Identifier Maximum : 128 00:27:30.902 Number of ANA Group Identifiers : 128 00:27:30.902 Max Number of Allowed Namespaces : 1024 00:27:30.902 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:30.902 Command Effects Log Page: Supported 00:27:30.902 Get Log Page Extended Data: Supported 00:27:30.902 Telemetry Log Pages: Not Supported 00:27:30.902 Persistent Event Log Pages: Not Supported 00:27:30.902 Supported Log Pages Log Page: May Support 00:27:30.902 Commands Supported & Effects Log Page: Not Supported 00:27:30.902 Feature Identifiers & Effects Log Page:May Support 00:27:30.902 NVMe-MI Commands & Effects Log Page: May Support 00:27:30.902 Data Area 4 for Telemetry Log: Not Supported 00:27:30.902 Error Log Page Entries Supported: 128 00:27:30.902 Keep Alive: Supported 00:27:30.902 Keep Alive Granularity: 1000 ms 00:27:30.902 00:27:30.902 NVM Command Set Attributes 00:27:30.902 ========================== 00:27:30.902 Submission Queue Entry Size 00:27:30.902 Max: 64 00:27:30.902 Min: 64 00:27:30.902 Completion Queue Entry Size 00:27:30.902 Max: 16 00:27:30.902 Min: 16 00:27:30.902 Number of Namespaces: 1024 00:27:30.902 Compare Command: Not Supported 00:27:30.902 Write Uncorrectable Command: Not Supported 00:27:30.902 Dataset Management Command: Supported 
00:27:30.902 Write Zeroes Command: Supported 00:27:30.902 Set Features Save Field: Not Supported 00:27:30.902 Reservations: Not Supported 00:27:30.902 Timestamp: Not Supported 00:27:30.902 Copy: Not Supported 00:27:30.902 Volatile Write Cache: Present 00:27:30.902 Atomic Write Unit (Normal): 1 00:27:30.902 Atomic Write Unit (PFail): 1 00:27:30.902 Atomic Compare & Write Unit: 1 00:27:30.902 Fused Compare & Write: Not Supported 00:27:30.902 Scatter-Gather List 00:27:30.902 SGL Command Set: Supported 00:27:30.902 SGL Keyed: Not Supported 00:27:30.902 SGL Bit Bucket Descriptor: Not Supported 00:27:30.902 SGL Metadata Pointer: Not Supported 00:27:30.902 Oversized SGL: Not Supported 00:27:30.902 SGL Metadata Address: Not Supported 00:27:30.902 SGL Offset: Supported 00:27:30.902 Transport SGL Data Block: Not Supported 00:27:30.902 Replay Protected Memory Block: Not Supported 00:27:30.902 00:27:30.902 Firmware Slot Information 00:27:30.902 ========================= 00:27:30.902 Active slot: 0 00:27:30.902 00:27:30.902 Asymmetric Namespace Access 00:27:30.902 =========================== 00:27:30.902 Change Count : 0 00:27:30.902 Number of ANA Group Descriptors : 1 00:27:30.902 ANA Group Descriptor : 0 00:27:30.902 ANA Group ID : 1 00:27:30.902 Number of NSID Values : 1 00:27:30.902 Change Count : 0 00:27:30.902 ANA State : 1 00:27:30.902 Namespace Identifier : 1 00:27:30.902 00:27:30.902 Commands Supported and Effects 00:27:30.902 ============================== 00:27:30.902 Admin Commands 00:27:30.902 -------------- 00:27:30.902 Get Log Page (02h): Supported 00:27:30.902 Identify (06h): Supported 00:27:30.902 Abort (08h): Supported 00:27:30.902 Set Features (09h): Supported 00:27:30.902 Get Features (0Ah): Supported 00:27:30.902 Asynchronous Event Request (0Ch): Supported 00:27:30.902 Keep Alive (18h): Supported 00:27:30.902 I/O Commands 00:27:30.902 ------------ 00:27:30.902 Flush (00h): Supported 00:27:30.902 Write (01h): Supported LBA-Change 00:27:30.902 Read (02h): Supported 00:27:30.902 Write Zeroes (08h): Supported LBA-Change 00:27:30.902 Dataset Management (09h): Supported 00:27:30.902 00:27:30.902 Error Log 00:27:30.902 ========= 00:27:30.902 Entry: 0 00:27:30.902 Error Count: 0x3 00:27:30.902 Submission Queue Id: 0x0 00:27:30.902 Command Id: 0x5 00:27:30.902 Phase Bit: 0 00:27:30.902 Status Code: 0x2 00:27:30.902 Status Code Type: 0x0 00:27:30.902 Do Not Retry: 1 00:27:30.902 Error Location: 0x28 00:27:30.902 LBA: 0x0 00:27:30.902 Namespace: 0x0 00:27:30.902 Vendor Log Page: 0x0 00:27:30.902 ----------- 00:27:30.902 Entry: 1 00:27:30.902 Error Count: 0x2 00:27:30.902 Submission Queue Id: 0x0 00:27:30.902 Command Id: 0x5 00:27:30.902 Phase Bit: 0 00:27:30.902 Status Code: 0x2 00:27:30.902 Status Code Type: 0x0 00:27:30.902 Do Not Retry: 1 00:27:30.902 Error Location: 0x28 00:27:30.902 LBA: 0x0 00:27:30.903 Namespace: 0x0 00:27:30.903 Vendor Log Page: 0x0 00:27:30.903 ----------- 00:27:30.903 Entry: 2 00:27:30.903 Error Count: 0x1 00:27:30.903 Submission Queue Id: 0x0 00:27:30.903 Command Id: 0x4 00:27:30.903 Phase Bit: 0 00:27:30.903 Status Code: 0x2 00:27:30.903 Status Code Type: 0x0 00:27:30.903 Do Not Retry: 1 00:27:30.903 Error Location: 0x28 00:27:30.903 LBA: 0x0 00:27:30.903 Namespace: 0x0 00:27:30.903 Vendor Log Page: 0x0 00:27:30.903 00:27:30.903 Number of Queues 00:27:30.903 ================ 00:27:30.903 Number of I/O Submission Queues: 128 00:27:30.903 Number of I/O Completion Queues: 128 00:27:30.903 00:27:30.903 ZNS Specific Controller Data 00:27:30.903 
============================ 00:27:30.903 Zone Append Size Limit: 0 00:27:30.903 00:27:30.903 00:27:30.903 Active Namespaces 00:27:30.903 ================= 00:27:30.903 get_feature(0x05) failed 00:27:30.903 Namespace ID:1 00:27:30.903 Command Set Identifier: NVM (00h) 00:27:30.903 Deallocate: Supported 00:27:30.903 Deallocated/Unwritten Error: Not Supported 00:27:30.903 Deallocated Read Value: Unknown 00:27:30.903 Deallocate in Write Zeroes: Not Supported 00:27:30.903 Deallocated Guard Field: 0xFFFF 00:27:30.903 Flush: Supported 00:27:30.903 Reservation: Not Supported 00:27:30.903 Namespace Sharing Capabilities: Multiple Controllers 00:27:30.903 Size (in LBAs): 3750748848 (1788GiB) 00:27:30.903 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:30.903 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:30.903 UUID: da20e085-c5b8-454b-b17d-d239f717d0b3 00:27:30.903 Thin Provisioning: Not Supported 00:27:30.903 Per-NS Atomic Units: Yes 00:27:30.903 Atomic Write Unit (Normal): 8 00:27:30.903 Atomic Write Unit (PFail): 8 00:27:30.903 Preferred Write Granularity: 8 00:27:30.903 Atomic Compare & Write Unit: 8 00:27:30.903 Atomic Boundary Size (Normal): 0 00:27:30.903 Atomic Boundary Size (PFail): 0 00:27:30.903 Atomic Boundary Offset: 0 00:27:30.903 NGUID/EUI64 Never Reused: No 00:27:30.903 ANA group ID: 1 00:27:30.903 Namespace Write Protected: No 00:27:30.903 Number of LBA Formats: 1 00:27:30.903 Current LBA Format: LBA Format #00 00:27:30.903 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:30.903 00:27:30.903 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:30.903 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:30.903 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:27:30.903 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:30.903 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:27:30.903 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:30.903 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:30.903 rmmod nvme_tcp 00:27:30.903 rmmod nvme_fabrics 00:27:30.903 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:31.165 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:27:31.165 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:27:31.165 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:31.165 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:31.165 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:31.165 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:31.165 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:31.165 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:31.165 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.165 11:38:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:31.165 11:38:59 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.116 11:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:33.116 11:39:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:33.116 11:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:33.116 11:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:27:33.116 11:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:33.116 11:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:33.116 11:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:33.116 11:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:33.116 11:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:33.116 11:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:33.116 11:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:36.419 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:36.419 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:36.419 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:36.419 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:36.419 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:36.419 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:36.419 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:36.419 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:36.419 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:36.419 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:36.419 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:36.419 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:36.419 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:36.419 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:36.419 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:36.419 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:36.419 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:36.681 00:27:36.681 real 0m17.638s 00:27:36.681 user 0m4.437s 00:27:36.681 sys 0m10.116s 00:27:36.681 11:39:05 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:36.681 11:39:05 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:36.681 ************************************ 00:27:36.681 END TEST nvmf_identify_kernel_target 00:27:36.681 ************************************ 00:27:36.681 11:39:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:36.681 11:39:05 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:36.681 11:39:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:36.681 11:39:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:36.681 11:39:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:36.681 ************************************ 
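The configure_kernel_target and clean_kernel_target calls traced in this test boil down to a handful of configfs operations against the in-kernel nvmet target. The xtrace above does not show the redirection targets of the echo commands, so the attribute file names in the sketch below are the standard nvmet configfs ones rather than a verbatim quote of nvmf/common.sh; the NQN, listen address and backing block device are the values from this run:

  nqn=nqn.2016-06.io.spdk:testnqn
  sub=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1

  # setup (configure_kernel_target)
  modprobe nvmet
  mkdir "$sub" "$sub/namespaces/1" "$port"
  echo "SPDK-$nqn"  > "$sub/attr_model"           # reported back as Model Number in the identify output above
  echo 1            > "$sub/attr_allow_any_host"  # no host whitelist for the test
  echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
  echo 1            > "$sub/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"                # expose the subsystem on the TCP port

  # teardown (clean_kernel_target)
  echo 0 > "$sub/namespaces/1/enable"             # disable the namespace first (redirection target not shown in the trace)
  rm -f "$port/subsystems/$nqn"
  rmdir "$sub/namespaces/1" "$port" "$sub"
  modprobe -r nvmet_tcp nvmet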
00:27:36.681 START TEST nvmf_auth_host 00:27:36.681 ************************************ 00:27:36.681 11:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:36.943 * Looking for test storage... 00:27:36.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:36.943 11:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:45.091 
11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:45.091 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:45.091 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:45.091 Found net devices under 0000:4b:00.0: 
cvl_0_0 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:45.091 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:45.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:45.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms 00:27:45.091 00:27:45.091 --- 10.0.0.2 ping statistics --- 00:27:45.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.091 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:45.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:45.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:27:45.091 00:27:45.091 --- 10.0.0.1 ping statistics --- 00:27:45.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.091 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3703686 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3703686 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3703686 ']' 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:45.091 11:39:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
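Up to this point the trace has discovered the two E810 ports (ice driver, cvl_0_0 and cvl_0_1) and wired up the back-to-back TCP topology the rest of the run uses: cvl_0_0 is moved into a private network namespace and addressed as 10.0.0.2 for the target side, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is then started inside that namespace with the nvme_auth debug trace enabled. A condensed sketch of the same wiring, with interface names and addresses taken from the trace above (workspace paths shortened), is:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                           # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address, root netns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # root netns -> target side
ip netns exec "$NS" ping -c 1 10.0.0.1                    # target netns -> initiator side
modprobe nvme-tcp
ip netns exec "$NS" ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &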
00:27:45.092 11:39:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:45.092 11:39:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0f03aa53591b716d9260e018356ca54e 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.PPV 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0f03aa53591b716d9260e018356ca54e 0 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0f03aa53591b716d9260e018356ca54e 0 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0f03aa53591b716d9260e018356ca54e 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.PPV 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.PPV 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.PPV 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:45.092 
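gen_dhchap_key, as traced above, pulls the raw secret from /dev/urandom with xxd and then pipes it through an inline "python -" step whose body is not captured by xtrace. Judging from the DHHC-1 strings that appear later in this log (they base64-decode back to the hex text plus four trailing bytes), that step is assumed to emit the NVMe TP 8006 secret representation, DHHC-1:<hash id>:base64(secret plus its CRC-32):, with the hex text itself used as the secret. A hypothetical standalone equivalent of "gen_dhchap_key null 32" under those assumptions:

# Assumptions: CRC-32 appended little-endian; hash id 00 means no transform ("null").
key_hex=$(xxd -p -c0 -l 16 /dev/urandom)     # 16 random bytes -> 32 hex characters
keyfile=$(mktemp -t spdk.key-null.XXX)
python3 - "$key_hex" <<'PYEOF' > "$keyfile"
import base64, sys, zlib
secret = sys.argv[1].encode()                # the hex text itself is the secret
blob = secret + zlib.crc32(secret).to_bytes(4, "little")
print(f"DHHC-1:00:{base64.b64encode(blob).decode()}:")
PYEOF
chmod 0600 "$keyfile"
echo "$keyfile"                              # callers capture the path, as keys[0] does above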
11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=dfd8dcc3ab03ee85ceab202038c44f308c12da346563a4e5fc270badc9eb5a8a 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.uzE 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key dfd8dcc3ab03ee85ceab202038c44f308c12da346563a4e5fc270badc9eb5a8a 3 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 dfd8dcc3ab03ee85ceab202038c44f308c12da346563a4e5fc270badc9eb5a8a 3 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=dfd8dcc3ab03ee85ceab202038c44f308c12da346563a4e5fc270badc9eb5a8a 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.uzE 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.uzE 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.uzE 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ebe5fce2ae0a70243a38635f39d55959bdd7d05986b73a97 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.CMF 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ebe5fce2ae0a70243a38635f39d55959bdd7d05986b73a97 0 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ebe5fce2ae0a70243a38635f39d55959bdd7d05986b73a97 0 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ebe5fce2ae0a70243a38635f39d55959bdd7d05986b73a97 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.CMF 00:27:45.092 11:39:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.CMF 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.CMF 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=292419adac9fd2f45c18a3c326f43031927b7ccb5c4e9aab 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Kro 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 292419adac9fd2f45c18a3c326f43031927b7ccb5c4e9aab 2 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 292419adac9fd2f45c18a3c326f43031927b7ccb5c4e9aab 2 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=292419adac9fd2f45c18a3c326f43031927b7ccb5c4e9aab 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Kro 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Kro 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Kro 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=429355c1871e974e9982421e56cbc8e2 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.YUY 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 429355c1871e974e9982421e56cbc8e2 1 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 429355c1871e974e9982421e56cbc8e2 1 
00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=429355c1871e974e9982421e56cbc8e2 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:45.092 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:45.353 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.YUY 00:27:45.353 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.YUY 00:27:45.353 11:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.YUY 00:27:45.353 11:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:45.353 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:45.353 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:45.353 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:45.353 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:45.353 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:45.353 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:45.353 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7fb150a565a489c26fa7d396071f851c 00:27:45.353 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:45.353 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.hOt 00:27:45.353 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7fb150a565a489c26fa7d396071f851c 1 00:27:45.353 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7fb150a565a489c26fa7d396071f851c 1 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7fb150a565a489c26fa7d396071f851c 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.hOt 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.hOt 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.hOt 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=46cc61cf0ad30613e969d613fa1ae5c4a0a09eb2c73127ea 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ZRp 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 46cc61cf0ad30613e969d613fa1ae5c4a0a09eb2c73127ea 2 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 46cc61cf0ad30613e969d613fa1ae5c4a0a09eb2c73127ea 2 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=46cc61cf0ad30613e969d613fa1ae5c4a0a09eb2c73127ea 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ZRp 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ZRp 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ZRp 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4fa310e3b8e67fb2012e54b01b2f9c99 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.p7D 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4fa310e3b8e67fb2012e54b01b2f9c99 0 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4fa310e3b8e67fb2012e54b01b2f9c99 0 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4fa310e3b8e67fb2012e54b01b2f9c99 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.p7D 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.p7D 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.p7D 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1455dd70d3d6d1509adb0f8ae6c4eec657acea5734f86cbac50190d306d30f91 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.h22 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1455dd70d3d6d1509adb0f8ae6c4eec657acea5734f86cbac50190d306d30f91 3 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1455dd70d3d6d1509adb0f8ae6c4eec657acea5734f86cbac50190d306d30f91 3 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1455dd70d3d6d1509adb0f8ae6c4eec657acea5734f86cbac50190d306d30f91 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:45.354 11:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:45.354 11:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.h22 00:27:45.354 11:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.h22 00:27:45.354 11:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.h22 00:27:45.354 11:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:45.354 11:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3703686 00:27:45.354 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3703686 ']' 00:27:45.354 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.354 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:45.354 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
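The net effect of the key-generation stretch above is five host secrets and four controller secrets; ckeys[4] is left empty, presumably so that key index 4 exercises the unidirectional (no controller key) path. Collected from the trace, the arrays end up as:

keys=(/tmp/spdk.key-null.PPV /tmp/spdk.key-null.CMF /tmp/spdk.key-sha256.YUY /tmp/spdk.key-sha384.ZRp /tmp/spdk.key-sha512.h22)
ckeys=(/tmp/spdk.key-sha512.uzE /tmp/spdk.key-sha384.Kro /tmp/spdk.key-sha256.hOt /tmp/spdk.key-null.p7D "")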
00:27:45.354 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:45.354 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.PPV 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.uzE ]] 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uzE 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.CMF 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Kro ]] 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Kro 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.YUY 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.hOt ]] 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.hOt 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
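rpc_cmd in this harness resolves to SPDK's scripts/rpc.py talking to the nvmf_tgt started earlier, so each generated key file is registered with the target's keyring under a stable name (key0/ckey0, key1/ckey1, and so on). Outside the harness the same registration would look roughly like this, assuming the default RPC socket path:

scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key1  /tmp/spdk.key-null.CMF
scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Kro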
00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.ZRp 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.p7D ]] 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.p7D 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.h22 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:45.615 11:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:45.616 11:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
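nvmet_auth_init, which the trace enters next, builds the Linux-kernel nvmet target that will sit at 10.0.0.1 and act as the authenticating controller. The following lines perform that setup through configfs; only the echoed values are visible in the xtrace, so the attribute file names below are filled in from the standard nvmet configfs layout and should be read as a sketch:

modprobe nvmet
SUBSYS=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
PORT=/sys/kernel/config/nvmet/ports/1
mkdir "$SUBSYS"
mkdir "$SUBSYS/namespaces/1"
mkdir "$PORT"
echo "SPDK-nqn.2024-02.io.spdk:cnode0" > "$SUBSYS/attr_model"
echo 1            > "$SUBSYS/attr_allow_any_host"        # later restricted via allowed_hosts
echo /dev/nvme0n1 > "$SUBSYS/namespaces/1/device_path"   # first unused, non-zoned NVMe disk
echo 1            > "$SUBSYS/namespaces/1/enable"
echo 10.0.0.1     > "$PORT/addr_traddr"
echo tcp          > "$PORT/addr_trtype"
echo 4420         > "$PORT/addr_trsvcid"
echo ipv4         > "$PORT/addr_adrfam"
ln -s "$SUBSYS" "$PORT/subsystems/"
nvme discover -t tcp -a 10.0.0.1 -s 4420                 # sanity check, as in the trace below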
00:27:45.616 11:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:45.616 11:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:45.875 11:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:45.875 11:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:49.177 Waiting for block devices as requested 00:27:49.177 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:49.177 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:49.177 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:49.177 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:49.177 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:49.177 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:49.437 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:49.437 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:49.437 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:49.697 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:49.697 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:49.958 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:49.958 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:49.958 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:49.958 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:50.218 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:50.218 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:51.157 No valid GPT data, bailing 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:51.157 00:27:51.157 Discovery Log Number of Records 2, Generation counter 2 00:27:51.157 =====Discovery Log Entry 0====== 00:27:51.157 trtype: tcp 00:27:51.157 adrfam: ipv4 00:27:51.157 subtype: current discovery subsystem 00:27:51.157 treq: not specified, sq flow control disable supported 00:27:51.157 portid: 1 00:27:51.157 trsvcid: 4420 00:27:51.157 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:51.157 traddr: 10.0.0.1 00:27:51.157 eflags: none 00:27:51.157 sectype: none 00:27:51.157 =====Discovery Log Entry 1====== 00:27:51.157 trtype: tcp 00:27:51.157 adrfam: ipv4 00:27:51.157 subtype: nvme subsystem 00:27:51.157 treq: not specified, sq flow control disable supported 00:27:51.157 portid: 1 00:27:51.157 trsvcid: 4420 00:27:51.157 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:51.157 traddr: 10.0.0.1 00:27:51.157 eflags: none 00:27:51.157 sectype: none 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 
]] 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.157 11:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.417 nvme0n1 00:27:51.417 11:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.417 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.417 11:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.417 11:39:19 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.417 11:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.417 11:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: ]] 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.417 
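connect_authenticate, seen above for key index 1, is the initiator-side half of each check: it constrains the SPDK host's DH-HMAC-CHAP digests and DH groups, attaches to the kernel target with the keyring entries loaded earlier, verifies the controller shows up, and detaches again. Reduced to the RPCs visible in the trace, one iteration is roughly:

scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0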
11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.417 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.677 nvme0n1 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.677 11:39:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: ]] 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.677 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.937 nvme0n1 00:27:51.937 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.937 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.937 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.937 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.937 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
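From here the run settles into its main matrix: for every digest, every FFDHE group and every key index, it first programs the kernel target's expectation for this host (nvmet_auth_set_key) and then repeats connect_authenticate with the matching keyring entries. The target-side half writes into the host entry under configfs; the dhchap_* attribute names below are the standard nvmet ones and do not appear verbatim in the xtrace, so treat this as a sketch of the key-index-0 iteration just traced:

HOST=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)'           > "$HOST/dhchap_hash"
echo ffdhe2048                > "$HOST/dhchap_dhgroup"
cat /tmp/spdk.key-null.PPV    > "$HOST/dhchap_key"       # host secret (keys[0])
cat /tmp/spdk.key-sha512.uzE  > "$HOST/dhchap_ctrl_key"  # controller secret (ckeys[0]); skipped when ckeys[i] is empty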
00:27:51.937 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.937 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.937 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.937 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: ]] 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.938 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.199 nvme0n1 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: ]] 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:27:52.199 11:39:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.199 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.460 nvme0n1 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.460 11:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.460 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.460 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.460 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.460 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.460 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.460 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.460 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.460 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.460 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.460 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.460 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.460 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.460 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:52.460 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.460 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.460 nvme0n1 00:27:52.460 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.460 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: ]] 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.720 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.721 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.721 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.721 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.721 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:52.721 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.721 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.981 nvme0n1 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: ]] 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.981 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.242 nvme0n1 00:27:53.242 
11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: ]] 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.242 11:39:21 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:53.243 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.243 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.243 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.243 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.243 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.243 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.243 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.243 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.243 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.243 11:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.243 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:53.243 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.243 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.504 nvme0n1 00:27:53.504 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.504 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.504 11:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.504 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.504 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.504 11:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.504 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.504 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.504 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.504 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.504 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.504 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.504 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
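(Editor's note: the ckey array built at host/auth.sh@58 is why keyid 4 attaches with --dhchap-key key4 only: bash's ${parameter:+word} expansion adds the controller-key arguments solely when a controller key exists for that slot. A small illustration of the idiom, using the ckey1 value from this trace and the empty slot 4:)
ckeys=( [1]='DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==:' [4]='' )
keyid=1; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${ckey[@]}"   # --dhchap-ctrlr-key ckey1
keyid=4; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${ckey[@]}"   # empty, so the attach gets no --dhchap-ctrlr-key argument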
00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: ]] 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.505 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.766 nvme0n1 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.766 
11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.766 11:39:22 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.766 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.028 nvme0n1 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: ]] 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:54.028 11:39:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.028 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.304 nvme0n1 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: ]] 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.304 11:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.305 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.305 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.305 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.305 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.305 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.305 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.305 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.305 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.305 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.305 11:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.305 11:39:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:54.305 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.305 11:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.594 nvme0n1 00:27:54.594 11:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.594 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.594 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.594 11:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.594 11:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.594 11:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.594 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.594 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.594 11:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.594 11:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: ]] 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.855 11:39:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.855 11:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.118 nvme0n1 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
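(Editor's note: get_main_ns_ip, re-expanded before every attach in this trace, only maps the transport to the name of the variable that holds the right address and then dereferences it. The dereference step is not visible in the xtrace, so the sketch below is an assumption consistent with the 10.0.0.1 this run resolves for tcp.)
declare -A ip_candidates=( [rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP )
NVMF_INITIATOR_IP=10.0.0.1
TEST_TRANSPORT=tcp
ip=${ip_candidates[$TEST_TRANSPORT]}   # selects the variable *name* NVMF_INITIATOR_IP
ip=${!ip}                              # indirect expansion (assumed step) -> 10.0.0.1
echo "$ip"                             # the -a address passed to bdev_nvme_attach_controller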
00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: ]] 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.118 11:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.379 nvme0n1 00:27:55.379 11:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.379 11:39:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.379 11:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.379 11:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.379 11:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.379 11:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.379 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.640 nvme0n1 00:27:55.640 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.640 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.640 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.640 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.640 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.640 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.900 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.900 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.900 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.900 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.900 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:27:55.901 11:39:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: ]] 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.901 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.472 nvme0n1 00:27:56.472 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.473 
11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: ]] 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.473 11:39:24 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.473 11:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.734 nvme0n1 00:27:56.734 11:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.734 11:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.734 11:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.734 11:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.734 11:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.734 11:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: ]] 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.996 11:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.569 nvme0n1 00:27:57.569 11:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.569 11:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.569 11:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.569 11:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.569 11:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.569 11:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.569 
11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: ]] 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.569 11:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.831 nvme0n1 00:27:57.831 11:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.831 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.831 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.831 11:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.831 11:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.831 11:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.093 11:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.665 nvme0n1 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: ]] 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.665 11:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.237 nvme0n1 00:27:59.237 11:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.237 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.237 11:39:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.237 11:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.237 11:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.237 11:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.237 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.237 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.237 11:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.237 11:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: ]] 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.499 11:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.071 nvme0n1 00:28:00.071 11:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.071 11:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.071 11:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.071 11:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.071 11:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.071 11:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.071 11:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.071 11:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.071 11:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.071 11:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: ]] 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:00.332 11:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.333 11:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.905 nvme0n1 00:28:00.905 11:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.905 11:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.905 11:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.905 11:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.905 11:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.905 11:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.905 11:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.905 
11:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.905 11:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: ]] 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
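The nvmet_auth_set_key entries in this stretch show only the echo side of the target configuration; xtrace does not print the redirections. A sketch of what those writes most likely target, assuming the Linux kernel nvmet configfs host attributes (the path and attribute names are assumptions; the echoed values are the ones from the log: hmac(sha256), ffdhe8192, and the keyid-3 key pair):

# Sketch, not the test's literal code: the configfs locations are assumed.
HOSTNQN=nqn.2024-02.io.spdk:host0
HOSTDIR=/sys/kernel/config/nvmet/hosts/$HOSTNQN
KEY='DHHC-1:02:...'     # host key for keyid 3; the full value appears in the log
CKEY='DHHC-1:00:...'    # controller key for keyid 3; the full value appears in the log

echo 'hmac(sha256)' > "$HOSTDIR/dhchap_hash"       # digest under test
echo ffdhe8192      > "$HOSTDIR/dhchap_dhgroup"    # DH group under test
echo "$KEY"         > "$HOSTDIR/dhchap_key"
[ -n "$CKEY" ] && echo "$CKEY" > "$HOSTDIR/dhchap_ctrl_key"   # only when a ckey is configured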
00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.166 11:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.738 nvme0n1 00:28:01.738 11:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.739 11:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.739 11:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.739 11:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.739 11:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.739 11:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.739 11:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.739 11:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.739 11:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.739 11:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.999 11:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.999 11:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.999 11:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:01.999 11:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.999 11:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:02.000 
11:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.000 11:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.571 nvme0n1 00:28:02.571 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.571 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.571 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.571 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.571 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.571 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.571 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.571 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.571 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.571 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: ]] 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.832 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.833 nvme0n1 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: ]] 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
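The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) assignment traced just above (host/auth.sh@58) is a bash parameter-expansion idiom for building an optional argument list: the ${ckeys[keyid]:+...} form expands to the two --dhchap-ctrlr-key words only when a controller key is defined for that keyid, which is why the keyid=4 attach earlier in this run carries no --dhchap-ctrlr-key at all. A minimal standalone sketch of the same idiom, using hypothetical demo_ckeys values rather than the script's real key arrays:

    #!/usr/bin/env bash
    # Controller keys exist for keyids 0-3 in this run; keyid 4 has none.
    demo_ckeys=("ck0" "ck1" "ck2" "ck3")

    for keyid in 0 1 2 3 4; do
        # ${arr[i]:+words} expands to "words" only when arr[i] is set and non-empty,
        # so ckey_args stays empty for keyid 4 and that attach is unidirectional.
        ckey_args=(${demo_ckeys[keyid]:+--dhchap-ctrlr-key "${demo_ckeys[keyid]}"})
        echo "keyid $keyid extra args: ${ckey_args[*]:-<none>}"
    done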
00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.833 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.094 nvme0n1 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:03.094 11:39:31 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: ]] 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.095 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.357 nvme0n1 00:28:03.357 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.357 11:39:31 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.357 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.357 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.357 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.357 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.357 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.357 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.357 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.357 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.357 11:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.357 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.357 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:03.357 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.357 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.357 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:03.357 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:03.357 11:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: ]] 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.357 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.619 nvme0n1 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.619 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.881 nvme0n1 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: ]] 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
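The get_main_ns_ip trace interleaved here (nvmf/common.sh@741-755) picks the address for the attach by mapping the transport name to the name of an environment variable and then resolving that variable; for tcp it lands on NVMF_INITIATOR_IP and prints 10.0.0.1. A rough reconstruction of that lookup from the trace alone - the TEST_TRANSPORT name and the ${!ip} indirection are assumptions, not lines copied from nvmf/common.sh:

    get_main_ns_ip_sketch() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # Bail out unless a transport is set and a candidate variable name exists for it.
        [[ -z "$TEST_TRANSPORT" ]] && return 1
        [[ -z "${ip_candidates[$TEST_TRANSPORT]}" ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}

        # Indirect expansion turns the stored name into its value,
        # e.g. NVMF_INITIATOR_IP -> 10.0.0.1 in this run.
        [[ -z "${!ip}" ]] && return 1
        echo "${!ip}"
    }

    # e.g. TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1 get_main_ns_ip_sketch  ->  10.0.0.1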
00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.881 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.143 nvme0n1 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: ]] 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
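Each iteration recorded through this stretch of the log follows the same cycle for one digest/dhgroup/keyid combination: nvmet_auth_set_key loads the key (and optional controller key) on the target, bdev_nvme_set_options restricts the host to that digest and DH group, bdev_nvme_attach_controller connects over TCP with the matching --dhchap-key (plus --dhchap-ctrlr-key when a ckey exists), bdev_nvme_get_controllers confirms nvme0 appeared, and bdev_nvme_detach_controller tears it down before the next combination. A condensed sketch of that loop as it reads from the trace - rpc_cmd, nvmet_auth_set_key, get_main_ns_ip and the digests/dhgroups/keys arrays are helpers and data provided by the SPDK test scripts, and the loop body here is reconstructed from the trace rather than copied from host/auth.sh:

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                # Target side: register the key (and controller key, if any) for this keyid.
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

                # Host side: only allow the digest/dhgroup pair under test.
                rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

                # Connect with DH-HMAC-CHAP; ckey expands to nothing when no controller key is set.
                rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                    -a "$(get_main_ns_ip)" -s 4420 \
                    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                    --dhchap-key "key${keyid}" "${ckey[@]}"

                # Verify the controller came up authenticated, then detach for the next pass.
                [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
                rpc_cmd bdev_nvme_detach_controller nvme0
            done
        done
    done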
00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.143 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.404 nvme0n1 00:28:04.404 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.404 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.404 11:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.404 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.404 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.404 11:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: ]] 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.404 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.405 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:04.405 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.405 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.668 nvme0n1 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: ]] 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.668 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.930 nvme0n1 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.930 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.191 nvme0n1 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.191 11:39:33 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: ]] 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.191 11:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.451 nvme0n1 00:28:05.451 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.451 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.451 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.451 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.451 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.451 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: ]] 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.712 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.973 nvme0n1 00:28:05.973 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.974 11:39:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: ]] 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.974 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.235 nvme0n1 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: ]] 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:06.235 11:39:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.235 11:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.496 nvme0n1 00:28:06.496 11:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.496 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.496 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.496 11:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.496 11:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:06.757 11:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.018 nvme0n1 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: ]] 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.018 11:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.590 nvme0n1 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: ]] 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.590 11:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.162 nvme0n1 00:28:08.162 11:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.162 11:39:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.162 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.162 11:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.162 11:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.162 11:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.162 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: ]] 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.163 11:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.879 nvme0n1 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: ]] 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.879 11:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.178 nvme0n1 00:28:09.178 11:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.178 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.178 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.178 11:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.178 11:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.178 11:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.178 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:28:09.178 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.178 11:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.178 11:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.178 11:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.178 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.178 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
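To make the repeated xtrace blocks easier to follow: nvmf_auth_host is walking every digest / DH group / key id combination, programming the kernel nvmet target and the SPDK initiator with the same DH-HMAC-CHAP material on each pass. The secrets use the DHHC-1:<hh>:<base64>: representation, where the two-digit field names the hash used to transform the secret (00 = none, 01/02/03 = SHA-256/384/512). A minimal sketch of the driving loop, reconstructed from the host/auth.sh@100-@104 trace lines above, follows; the digests/dhgroups/keys/ckeys arrays are filled in earlier in the script (not part of this excerpt), and rpc_cmd is assumed to be the usual SPDK test wrapper around scripts/rpc.py.

  # Reconstruction of the loop behind this trace (host/auth.sh@100-@104).
  # keys[]/ckeys[] hold DHHC-1 secrets; ckeys[4] is empty, so key id 4 exercises
  # host-only (unidirectional) authentication.
  for digest in "${digests[@]}"; do        # sha384 and sha512 appear in this excerpt
      for dhgroup in "${dhgroups[@]}"; do  # ffdhe4096, ffdhe6144, ffdhe8192, ffdhe2048 appear here
          for keyid in "${!keys[@]}"; do   # 0 1 2 3 4
              # Target side: hand hmac($digest), $dhgroup and keys[keyid]/ckeys[keyid]
              # to the nvmet host entry (helper defined earlier in auth.sh).
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
              # Initiator side: pin the same digest/dhgroup, attach, verify, detach.
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done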
00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.179 11:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.751 nvme0n1 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: ]] 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
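Stripped of the xtrace prefixes, the connect_authenticate body being traced here reduces to the sketch below. The RPC names, flags, NQNs and the 10.0.0.1:4420 endpoint are taken verbatim from the trace; key${keyid}/ckey${keyid} are key names registered earlier in the script, and treating rpc_cmd as a thin wrapper over scripts/rpc.py is an assumption about the surrounding test plumbing rather than something shown in this excerpt.

  # Reconstruction of connect_authenticate (host/auth.sh@55-@65) as traced above.
  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3
      # Only pass a controller key when one exists for this key id
      # (ckeys[4] is empty, so key id 4 skips bidirectional authentication).
      local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

      # Restrict the initiator to exactly this digest/DH-group pair.
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

      # Connect to the target; get_main_ns_ip resolves to 10.0.0.1
      # (NVMF_INITIATOR_IP) for the tcp transport in this run.
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"

      # Authentication succeeded only if the controller actually materialised.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }

The [[ nvme0 == \n\v\m\e\0 ]] comparisons in the trace are that jq check after expansion, and the bare nvme0n1 lines are the attach call printing the bdev it created for the target's namespace.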
00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.751 11:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.695 nvme0n1 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: ]] 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.695 11:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.266 nvme0n1 00:28:11.266 11:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.527 11:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.527 11:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.527 11:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.527 11:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.527 11:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: ]] 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.527 11:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.470 nvme0n1 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: ]] 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.470 11:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.040 nvme0n1 00:28:13.040 11:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.040 11:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:28:13.040 11:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.040 11:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.040 11:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.040 11:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.040 11:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.040 11:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.040 11:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.040 11:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.040 11:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.040 11:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.040 11:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:13.040 11:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.040 11:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:13.040 11:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:13.040 11:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:13.040 11:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:28:13.041 11:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:13.041 11:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:13.041 11:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:13.041 11:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:28:13.041 11:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:13.041 11:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:13.041 11:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.041 11:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:13.041 11:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:13.041 11:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:13.041 11:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.041 11:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:13.041 11:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.041 11:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.041 11:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.041 11:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.041 11:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:13.041 11:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:13.041 11:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.041 11:39:41 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.041 11:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.041 11:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.041 11:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.041 11:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.041 11:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.041 11:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.041 11:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:13.041 11:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.041 11:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.982 nvme0n1 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: ]] 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.982 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.242 nvme0n1 00:28:14.242 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.242 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.242 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.243 11:39:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: ]] 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.243 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.504 nvme0n1 00:28:14.504 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.504 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.504 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.504 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.504 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.504 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.504 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.504 11:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.504 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.504 11:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: ]] 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.504 nvme0n1 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.504 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.764 11:39:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: ]] 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.764 11:39:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.764 nvme0n1 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.764 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.024 nvme0n1 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: ]] 00:28:15.024 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:28:15.284 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:15.284 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.284 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.285 nvme0n1 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.285 
11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: ]] 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.285 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.545 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.545 11:39:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.545 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.545 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.545 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.545 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.545 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.545 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.545 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.545 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.545 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.545 11:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.545 11:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:15.545 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.545 11:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.545 nvme0n1 00:28:15.545 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.545 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.545 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.545 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.545 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.545 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.545 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.545 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.545 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.545 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.806 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: ]] 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.807 nvme0n1 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.807 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.068 11:39:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: ]] 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.068 nvme0n1 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.068 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.330 
11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.330 11:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.330 nvme0n1 00:28:16.330 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.330 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.330 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.330 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.330 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.330 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.591 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.591 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.591 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.591 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.591 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.591 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:16.591 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.591 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:28:16.591 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.591 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.591 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:16.591 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:16.591 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: ]] 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.592 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.852 nvme0n1 00:28:16.852 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.852 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.852 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.852 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.852 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.852 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.852 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.852 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.852 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.852 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.852 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.852 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.852 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:16.852 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.852 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.852 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:16.852 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: ]] 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.853 11:39:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.853 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.114 nvme0n1 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
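
The nvmet_auth_set_key steps above load each DHHC-1 secret into the kernel target before the host reconnects with that key. The xtrace output shows the echoed values but not the files they are redirected into, so the following is only a sketch of what such a target-side update typically looks like, assuming the Linux nvmet configfs layout; the helper name and paths are assumptions, not the script's literal code.

  # Sketch: program one host NQN's DH-HMAC-CHAP parameters via nvmet configfs.
  # Assumes the nvmet target and the host entry already exist; run as root.
  nvmet_sketch_set_key() {
      local digest=$1 dhgroup=$2 key=$3 ckey=$4
      local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
      echo "hmac(${digest})" > "${host_dir}/dhchap_hash"     # e.g. hmac(sha512)
      echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"  # e.g. ffdhe4096
      echo "${key}"          > "${host_dir}/dhchap_key"      # host secret (DHHC-1:xx:...)
      if [ -n "${ckey}" ]; then
          echo "${ckey}" > "${host_dir}/dhchap_ctrl_key"     # bidirectional (controller) secret
      fi
  }
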
00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: ]] 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.114 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:17.375 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:17.375 11:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:17.375 11:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:17.375 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.375 11:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.635 nvme0n1 00:28:17.635 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.635 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:28:17.635 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: ]] 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.636 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.896 nvme0n1 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.896 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.157 nvme0n1 00:28:18.157 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.157 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.157 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.157 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.157 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.157 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: ]] 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
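
The host/auth.sh@101 and @102 markers in this stretch show the harness walking every DH group against every key index for the sha512 digest: program the target, then authenticate from the host. Reconstructed from those markers, the driver loop looks roughly like the skeleton below; the array contents and helper bodies are assumptions (only the call pattern and the values visible in this log are taken from the trace), and the two helpers correspond to the target-side and host-side sketches shown nearby.

  # Skeleton of the test matrix implied by host/auth.sh@101-@104.
  digest=sha512                              # the only digest exercised in this part of the log
  dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)   # groups visible here; the full list is an assumption
  # keys[]/ckeys[] are assumed to hold the DHHC-1 secrets printed in the trace, indexed 0..4.
  for dhgroup in "${dhgroups[@]}"; do        # host/auth.sh@101
      for keyid in "${!keys[@]}"; do         # host/auth.sh@102
          nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # @103: update the target's key
          connect_authenticate "$digest" "$dhgroup" "$keyid"   # @104: attach, verify, detach
      done
  done
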
00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.417 11:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.678 nvme0n1 00:28:18.678 11:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.678 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.678 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.678 11:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.678 11:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.939 11:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.939 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.939 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.939 11:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.939 11:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.939 11:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.939 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.939 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:18.939 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.939 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:18.939 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:18.939 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:18.939 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:28:18.939 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:28:18.939 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:18.939 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:18.939 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:28:18.940 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: ]] 00:28:18.940 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:28:18.940 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
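
Each connect_authenticate pass that follows reduces to four SPDK RPCs, all of which appear verbatim in the trace (rpc_cmd is the harness wrapper around SPDK's RPC client). Outside the harness the same iteration could be replayed with scripts/rpc.py; the sketch below mirrors the ffdhe6144/keyid=1 case. The rpc.py path is an assumption, and key1/ckey1 are key names the harness prepared earlier in the run (their registration is not shown in this part of the log).

  # Sketch: one authenticated attach/verify/detach cycle via SPDK RPCs.
  RPC=scripts/rpc.py   # path is an assumption; adjust to the local SPDK checkout

  # Restrict the initiator to the digest/DH group under test.
  $RPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

  # Attach with DH-HMAC-CHAP; key1/ckey1 name secrets prepared earlier in the run.
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Authentication succeeded if the controller shows up as nvme0, then clean up.
  $RPC bdev_nvme_get_controllers | jq -r '.[].name'
  $RPC bdev_nvme_detach_controller nvme0
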
00:28:18.940 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.940 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:18.940 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:18.940 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:18.940 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.940 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:18.940 11:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.940 11:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.940 11:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.940 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.940 11:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.940 11:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.940 11:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.940 11:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.940 11:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.940 11:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.940 11:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.940 11:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.940 11:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.940 11:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.940 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:18.940 11:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.940 11:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.511 nvme0n1 00:28:19.511 11:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.511 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.511 11:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.511 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.511 11:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.511 11:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.511 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.511 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.511 11:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.511 11:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.511 11:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.511 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:28:19.511 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:19.511 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.511 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:19.511 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:19.511 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:19.511 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:28:19.511 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:28:19.511 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:19.511 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:19.511 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:28:19.511 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: ]] 00:28:19.511 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:28:19.511 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:19.511 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.511 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:19.511 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:19.512 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:19.512 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.512 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:19.512 11:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.512 11:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.512 11:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.512 11:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.512 11:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.512 11:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.512 11:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.512 11:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.512 11:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.512 11:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.512 11:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.512 11:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.512 11:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.512 11:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.512 11:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:19.512 11:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.512 11:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.772 nvme0n1 00:28:19.772 11:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.772 11:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.772 11:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.772 11:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.772 11:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.772 11:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: ]] 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.033 11:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.605 nvme0n1 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.605 11:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.606 11:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.606 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.606 11:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.606 11:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.606 11:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.606 11:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.606 11:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.606 11:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.606 11:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.606 11:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.606 11:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.606 11:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.606 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:20.606 11:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.606 11:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.867 nvme0n1 00:28:20.867 11:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.867 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.867 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.867 11:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.867 11:39:49 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.867 11:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGYwM2FhNTM1OTFiNzE2ZDkyNjBlMDE4MzU2Y2E1NGXuRRTj: 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: ]] 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGZkOGRjYzNhYjAzZWU4NWNlYWIyMDIwMzhjNDRmMzA4YzEyZGEzNDY1NjNhNGU1ZmMyNzBiYWRjOWViNWE4YXkn0/o=: 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.128 11:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.700 nvme0n1 00:28:21.700 11:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.700 11:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.700 11:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.700 11:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.700 11:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: ]] 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.961 11:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.531 nvme0n1 00:28:22.531 11:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.531 11:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.531 11:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.531 11:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.531 11:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.792 11:39:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDI5MzU1YzE4NzFlOTc0ZTk5ODI0MjFlNTZjYmM4ZTK59Yqz: 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: ]] 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2ZiMTUwYTU2NWE0ODljMjZmYTdkMzk2MDcxZjg1MWPozbZY: 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.792 11:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.363 nvme0n1 00:28:23.363 11:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.363 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.363 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.363 11:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.363 11:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDZjYzYxY2YwYWQzMDYxM2U5NjlkNjEzZmExYWU1YzRhMGEwOWViMmM3MzEyN2VhqJv+5A==: 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: ]] 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGZhMzEwZTNiOGU2N2ZiMjAxMmU1NGIwMWIyZjljOTksAEKp: 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:23.624 11:39:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.624 11:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.196 nvme0n1 00:28:24.196 11:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.196 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.196 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.196 11:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.196 11:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.196 11:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTQ1NWRkNzBkM2Q2ZDE1MDlhZGIwZjhhZTZjNGVlYzY1N2FjZWE1NzM0Zjg2Y2JhYzUwMTkwZDMwNmQzMGY5MfDtIQk=: 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:24.457 11:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.031 nvme0n1 00:28:25.031 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.031 11:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.031 11:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.031 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.031 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.291 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.291 11:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.291 11:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.291 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.291 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.291 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.291 11:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:25.291 11:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.291 11:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:25.291 11:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:25.291 11:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:25.291 11:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:28:25.291 11:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJlNWZjZTJhZTBhNzAyNDNhMzg2MzVmMzlkNTU5NTliZGQ3ZDA1OTg2YjczYTk3+z1iMA==: 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: ]] 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjkyNDE5YWRhYzlmZDJmNDVjMThhM2MzMjZmNDMwMzE5MjdiN2NjYjVjNGU5YWFiuiuO6Q==: 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.292 
11:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.292 request: 00:28:25.292 { 00:28:25.292 "name": "nvme0", 00:28:25.292 "trtype": "tcp", 00:28:25.292 "traddr": "10.0.0.1", 00:28:25.292 "adrfam": "ipv4", 00:28:25.292 "trsvcid": "4420", 00:28:25.292 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:25.292 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:25.292 "prchk_reftag": false, 00:28:25.292 "prchk_guard": false, 00:28:25.292 "hdgst": false, 00:28:25.292 "ddgst": false, 00:28:25.292 "method": "bdev_nvme_attach_controller", 00:28:25.292 "req_id": 1 00:28:25.292 } 00:28:25.292 Got JSON-RPC error response 00:28:25.292 response: 00:28:25.292 { 00:28:25.292 "code": -5, 00:28:25.292 "message": "Input/output error" 00:28:25.292 } 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.292 request: 00:28:25.292 { 00:28:25.292 "name": "nvme0", 00:28:25.292 "trtype": "tcp", 00:28:25.292 "traddr": "10.0.0.1", 00:28:25.292 "adrfam": "ipv4", 00:28:25.292 "trsvcid": "4420", 00:28:25.292 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:25.292 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:25.292 "prchk_reftag": false, 00:28:25.292 "prchk_guard": false, 00:28:25.292 "hdgst": false, 00:28:25.292 "ddgst": false, 00:28:25.292 "dhchap_key": "key2", 00:28:25.292 "method": "bdev_nvme_attach_controller", 00:28:25.292 "req_id": 1 00:28:25.292 } 00:28:25.292 Got JSON-RPC error response 00:28:25.292 response: 00:28:25.292 { 00:28:25.292 "code": -5, 00:28:25.292 "message": "Input/output error" 00:28:25.292 } 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:25.292 11:39:53 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:25.292 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:25.553 11:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.553 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.553 11:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:25.553 11:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.553 request: 00:28:25.553 { 00:28:25.553 "name": "nvme0", 00:28:25.553 "trtype": "tcp", 00:28:25.553 "traddr": "10.0.0.1", 00:28:25.553 "adrfam": "ipv4", 
00:28:25.553 "trsvcid": "4420", 00:28:25.553 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:25.553 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:25.553 "prchk_reftag": false, 00:28:25.553 "prchk_guard": false, 00:28:25.553 "hdgst": false, 00:28:25.553 "ddgst": false, 00:28:25.553 "dhchap_key": "key1", 00:28:25.553 "dhchap_ctrlr_key": "ckey2", 00:28:25.553 "method": "bdev_nvme_attach_controller", 00:28:25.553 "req_id": 1 00:28:25.553 } 00:28:25.553 Got JSON-RPC error response 00:28:25.553 response: 00:28:25.553 { 00:28:25.553 "code": -5, 00:28:25.553 "message": "Input/output error" 00:28:25.553 } 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:25.553 rmmod nvme_tcp 00:28:25.553 rmmod nvme_fabrics 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3703686 ']' 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3703686 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 3703686 ']' 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 3703686 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3703686 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3703686' 00:28:25.553 killing process with pid 3703686 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 3703686 00:28:25.553 11:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 3703686 00:28:25.814 11:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:28:25.814 11:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:25.814 11:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:25.814 11:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:25.814 11:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:25.814 11:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.814 11:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:25.814 11:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.729 11:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:27.729 11:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:27.729 11:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:27.729 11:39:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:27.729 11:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:27.729 11:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:28:27.990 11:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:27.990 11:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:27.990 11:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:27.990 11:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:27.990 11:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:27.990 11:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:27.990 11:39:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:31.293 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:31.293 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:31.293 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:31.293 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:31.293 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:31.293 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:31.293 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:31.293 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:31.293 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:31.293 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:31.293 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:31.293 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:31.293 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:31.293 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:31.293 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:31.554 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:31.554 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:31.837 11:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.PPV /tmp/spdk.key-null.CMF /tmp/spdk.key-sha256.YUY /tmp/spdk.key-sha384.ZRp /tmp/spdk.key-sha512.h22 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:31.837 11:40:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:35.140 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:35.140 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:35.140 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:35.140 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:35.140 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:35.140 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:35.140 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:35.140 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:35.140 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:35.140 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:35.140 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:35.140 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:35.140 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:35.140 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:35.140 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:35.140 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:35.140 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:35.403 00:28:35.403 real 0m58.620s 00:28:35.403 user 0m52.468s 00:28:35.403 sys 0m14.921s 00:28:35.403 11:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:35.403 11:40:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.403 ************************************ 00:28:35.403 END TEST nvmf_auth_host 00:28:35.403 ************************************ 00:28:35.403 11:40:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:35.403 11:40:03 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:28:35.403 11:40:03 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:35.403 11:40:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:35.403 11:40:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:35.403 11:40:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:35.403 ************************************ 00:28:35.403 START TEST nvmf_digest 00:28:35.403 ************************************ 00:28:35.403 11:40:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:35.669 * Looking for test storage... 
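[annotation] Before the digest suite gets going, it is worth condensing what the nvmf_auth_host run that ends above actually exercised per keyid. Each pass pushed one DH-HMAC-CHAP key pair into the kernel target and then authenticated an SPDK host controller against it. A hedged sketch of the host-side commands for the sha512/ffdhe8192, keyid 2 pass follows; rpc.py stands for scripts/rpc.py against the default RPC socket, and key2/ckey2 are key names registered earlier in the test, outside this excerpt:

  # Restrict the initiator to the digest/dhgroup under test.
  rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # Authenticated connect to the kernel target listening on 10.0.0.1:4420.
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # The controller must show up, then it is torn down before the next keyid.
  [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc.py bdev_nvme_detach_controller nvme0

The NOT-wrapped attaches near the end (no key, key2 only, key1 with ckey2) are the negative half of the test: the target refuses the connect and the RPC is expected to fail with code -5, Input/output error, exactly as the JSON request/response dumps above show.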
00:28:35.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:35.669 11:40:04 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:35.669 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:35.669 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:35.669 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:35.669 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:35.669 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:35.669 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:35.669 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:35.669 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:35.669 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:35.669 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:35.669 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:35.669 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:35.669 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:35.669 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:35.669 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:35.669 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:35.669 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:35.669 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:35.670 11:40:04 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:28:35.670 11:40:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:43.866 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:43.866 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:43.866 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:43.866 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:43.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:43.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:28:43.866 00:28:43.866 --- 10.0.0.2 ping statistics --- 00:28:43.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.866 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:43.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:43.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.354 ms 00:28:43.866 00:28:43.866 --- 10.0.0.1 ping statistics --- 00:28:43.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.866 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:43.866 11:40:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:43.867 11:40:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:43.867 ************************************ 00:28:43.867 START TEST nvmf_digest_clean 00:28:43.867 ************************************ 00:28:43.867 11:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:28:43.867 11:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:43.867 11:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:43.867 11:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:43.867 11:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:43.867 11:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:43.867 11:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:43.867 11:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:43.867 11:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:43.867 11:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3720333 00:28:43.867 11:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3720333 00:28:43.867 11:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3720333 ']' 00:28:43.867 11:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:43.867 11:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:43.867 
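[annotation] The two pings that complete above are the last step of nvmf_tcp_init. The topology it builds from the two detected E810 ports (cvl_0_0, cvl_0_1) is easiest to read as the plain iproute2/iptables sequence below, transcribed from the trace with only the comments added:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                  # target side lives in its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator keeps cvl_0_1 in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                             # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns

Every later nvmf_tgt invocation is therefore wrapped in "ip netns exec cvl_0_0_ns_spdk", while bdevperf runs unwrapped on the initiator side of the link.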
11:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:43.867 11:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:43.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:43.867 11:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:43.867 11:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:43.867 [2024-07-15 11:40:11.518718] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:28:43.867 [2024-07-15 11:40:11.518777] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:43.867 EAL: No free 2048 kB hugepages reported on node 1 00:28:43.867 [2024-07-15 11:40:11.589419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.867 [2024-07-15 11:40:11.663015] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:43.867 [2024-07-15 11:40:11.663053] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:43.867 [2024-07-15 11:40:11.663061] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:43.867 [2024-07-15 11:40:11.663068] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:43.867 [2024-07-15 11:40:11.663073] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
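[annotation] The target is now up inside the namespace with --wait-for-rpc, so it sits idle until the test's common_target_config pushes a configuration over /var/tmp/spdk.sock. The trace below only shows the resulting null0 bdev and the 10.0.0.2:4420 listener; a plausible sketch of the RPC sequence behind it is given here, where the null-bdev size/block size and the subsystem flags are assumptions and everything else (nqn, serial, transport options, listener address) is taken from the trace:

  RPC="ip netns exec cvl_0_0_ns_spdk scripts/rpc.py"    # target namespace, default socket
  $RPC framework_start_init                             # release the --wait-for-rpc pause
  $RPC bdev_null_create null0 1000 512                  # size/block size assumed for illustration
  $RPC nvmf_create_transport -t tcp -o                  # mirrors NVMF_TRANSPORT_OPTS='-t tcp -o'
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -f ipv4 -a 10.0.0.2 -s 4420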
00:28:43.867 [2024-07-15 11:40:11.663092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.867 11:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:43.867 11:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:43.867 11:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:43.867 11:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:43.867 11:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:43.867 11:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:43.867 11:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:43.867 11:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:43.867 11:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:43.867 11:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.867 11:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:43.867 null0 00:28:43.867 [2024-07-15 11:40:12.401600] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:43.867 [2024-07-15 11:40:12.425765] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:43.867 11:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.867 11:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:43.867 11:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:43.867 11:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:43.867 11:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:43.867 11:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:43.867 11:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:43.867 11:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:43.867 11:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3720424 00:28:43.867 11:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3720424 /var/tmp/bperf.sock 00:28:43.867 11:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3720424 ']' 00:28:43.867 11:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:43.867 11:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:43.867 11:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:43.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
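[annotation] run_bperf randread 4096 128 false then drives the initiator side. Condensed from the bdevperf command line and the two bperf_rpc calls visible in the trace (paths shortened), the client setup is roughly the following; the backgrounding and comments are added here, the commands themselves appear verbatim below:

  # Start bdevperf paused, listening on its own RPC socket.
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # scan_dsa=false for this pass, so no DSA accel module is configured before init.
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  # --ddgst enables the NVMe/TCP data digest, so every I/O exercises CRC32C.
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Kick off the 2-second random-read run against the resulting nvme0n1 bdev.
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests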
00:28:43.867 11:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:43.867 11:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:43.867 11:40:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:43.867 [2024-07-15 11:40:12.479442] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:28:43.867 [2024-07-15 11:40:12.479492] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3720424 ] 00:28:43.867 EAL: No free 2048 kB hugepages reported on node 1 00:28:43.867 [2024-07-15 11:40:12.554730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.165 [2024-07-15 11:40:12.618999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.737 11:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:44.737 11:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:44.737 11:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:44.737 11:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:44.737 11:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:44.996 11:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:44.996 11:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:45.257 nvme0n1 00:28:45.257 11:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:45.257 11:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:45.257 Running I/O for 2 seconds... 
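Each clean-digest run then drives the same three RPC calls against the bperf socket. The commands below are copied from the rpc.py/bdevperf.py lines traced above, with the absolute /var/jenkins/workspace/... prefixes shortened to repo-relative paths and comments added:

    # finish bdevperf start-up once the options are in place
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    # attach the TCP controller with data digest (--ddgst) enabled
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # run the 2-second workload
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests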
00:28:47.169 00:28:47.169 Latency(us) 00:28:47.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.169 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:47.169 nvme0n1 : 2.00 20600.08 80.47 0.00 0.00 6206.10 2867.20 17476.27 00:28:47.169 =================================================================================================================== 00:28:47.169 Total : 20600.08 80.47 0.00 0.00 6206.10 2867.20 17476.27 00:28:47.169 0 00:28:47.169 11:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:47.169 11:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:47.169 11:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:47.169 11:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:47.169 11:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:47.169 | select(.opcode=="crc32c") 00:28:47.169 | "\(.module_name) \(.executed)"' 00:28:47.431 11:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:47.431 11:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:47.431 11:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:47.431 11:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:47.431 11:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3720424 00:28:47.431 11:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3720424 ']' 00:28:47.431 11:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3720424 00:28:47.431 11:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:47.431 11:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:47.431 11:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3720424 00:28:47.431 11:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:47.431 11:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:47.431 11:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3720424' 00:28:47.431 killing process with pid 3720424 00:28:47.431 11:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3720424 00:28:47.431 Received shutdown signal, test time was about 2.000000 seconds 00:28:47.431 00:28:47.431 Latency(us) 00:28:47.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.431 =================================================================================================================== 00:28:47.431 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:47.431 11:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3720424 00:28:47.692 11:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:47.692 11:40:16 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:47.692 11:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:47.692 11:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:47.692 11:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:47.692 11:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:47.692 11:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:47.692 11:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3721113 00:28:47.692 11:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3721113 /var/tmp/bperf.sock 00:28:47.692 11:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3721113 ']' 00:28:47.692 11:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:47.692 11:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:47.692 11:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:47.692 11:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:47.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:47.692 11:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:47.692 11:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:47.692 [2024-07-15 11:40:16.217753] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:28:47.692 [2024-07-15 11:40:16.217809] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3721113 ] 00:28:47.692 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:47.692 Zero copy mechanism will not be used. 
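After each run the script decides pass/fail from the accelerator statistics rather than from the I/O numbers: accel_get_stats is read over the bperf socket and the crc32c entry is extracted with jq. With scan_dsa=false the expected module is "software" and the executed count must be non-zero, which is what the software == software comparison traced above reflects. A sketch of that check, reusing the acc_module/acc_executed/exp_module names from the trace (the read plumbing here is paraphrased, not copied):

    # which accel module computed crc32c for this run, and how many times?
    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
      | { read -r acc_module acc_executed
          exp_module=software          # scan_dsa=false, so no DSA offload expected
          (( acc_executed > 0 ))       # the digest path must actually have run
          [[ $acc_module == "$exp_module" ]]; }

The 131072-byte runs additionally log the zero-copy notice because 131072 exceeds the 65536-byte zero-copy threshold; that message is informational and does not affect this check.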
00:28:47.692 EAL: No free 2048 kB hugepages reported on node 1 00:28:47.692 [2024-07-15 11:40:16.293678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.692 [2024-07-15 11:40:16.347485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.264 11:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:48.264 11:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:48.264 11:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:48.264 11:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:48.264 11:40:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:48.526 11:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:48.526 11:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:49.097 nvme0n1 00:28:49.097 11:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:49.097 11:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:49.097 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:49.097 Zero copy mechanism will not be used. 00:28:49.097 Running I/O for 2 seconds... 
00:28:51.011 00:28:51.011 Latency(us) 00:28:51.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:51.011 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:51.011 nvme0n1 : 2.00 2371.66 296.46 0.00 0.00 6743.31 4096.00 11578.03 00:28:51.011 =================================================================================================================== 00:28:51.011 Total : 2371.66 296.46 0.00 0.00 6743.31 4096.00 11578.03 00:28:51.011 0 00:28:51.012 11:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:51.012 11:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:51.012 11:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:51.012 11:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:51.012 | select(.opcode=="crc32c") 00:28:51.012 | "\(.module_name) \(.executed)"' 00:28:51.012 11:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:51.272 11:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:51.272 11:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:51.272 11:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:51.272 11:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:51.272 11:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3721113 00:28:51.272 11:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3721113 ']' 00:28:51.272 11:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3721113 00:28:51.272 11:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:51.272 11:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:51.272 11:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3721113 00:28:51.272 11:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:51.272 11:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:51.272 11:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3721113' 00:28:51.272 killing process with pid 3721113 00:28:51.272 11:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3721113 00:28:51.272 Received shutdown signal, test time was about 2.000000 seconds 00:28:51.272 00:28:51.272 Latency(us) 00:28:51.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:51.272 =================================================================================================================== 00:28:51.272 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:51.272 11:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3721113 00:28:51.533 11:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:51.533 11:40:20 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:51.533 11:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:51.533 11:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:51.533 11:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:51.533 11:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:51.533 11:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:51.533 11:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3721912 00:28:51.533 11:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3721912 /var/tmp/bperf.sock 00:28:51.533 11:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3721912 ']' 00:28:51.533 11:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:51.533 11:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:51.533 11:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:51.533 11:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:51.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:51.533 11:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:51.533 11:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:51.533 [2024-07-15 11:40:20.052002] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:28:51.533 [2024-07-15 11:40:20.052061] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3721912 ] 00:28:51.533 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.533 [2024-07-15 11:40:20.126300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.533 [2024-07-15 11:40:20.179723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.478 11:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:52.478 11:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:52.478 11:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:52.478 11:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:52.478 11:40:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:52.478 11:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:52.478 11:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:52.738 nvme0n1 00:28:52.738 11:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:52.738 11:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:52.999 Running I/O for 2 seconds... 
00:28:54.912 00:28:54.912 Latency(us) 00:28:54.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.912 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:54.912 nvme0n1 : 2.01 22071.35 86.22 0.00 0.00 5791.48 3986.77 12670.29 00:28:54.912 =================================================================================================================== 00:28:54.912 Total : 22071.35 86.22 0.00 0.00 5791.48 3986.77 12670.29 00:28:54.912 0 00:28:54.912 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:54.912 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:54.912 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:54.912 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:54.912 | select(.opcode=="crc32c") 00:28:54.912 | "\(.module_name) \(.executed)"' 00:28:54.912 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:55.172 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:55.172 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:55.172 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:55.172 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:55.172 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3721912 00:28:55.172 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3721912 ']' 00:28:55.172 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3721912 00:28:55.172 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:55.172 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:55.173 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3721912 00:28:55.173 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:55.173 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:55.173 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3721912' 00:28:55.173 killing process with pid 3721912 00:28:55.173 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3721912 00:28:55.173 Received shutdown signal, test time was about 2.000000 seconds 00:28:55.173 00:28:55.173 Latency(us) 00:28:55.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.173 =================================================================================================================== 00:28:55.173 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:55.173 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3721912 00:28:55.173 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:55.173 11:40:23 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:55.173 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:55.173 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:55.173 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:55.173 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:55.173 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:55.173 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3722736 00:28:55.173 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3722736 /var/tmp/bperf.sock 00:28:55.173 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3722736 ']' 00:28:55.173 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:55.173 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:55.173 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:55.173 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:55.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:55.173 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:55.173 11:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:55.433 [2024-07-15 11:40:23.879074] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:28:55.433 [2024-07-15 11:40:23.879191] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3722736 ] 00:28:55.433 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:55.433 Zero copy mechanism will not be used. 
00:28:55.433 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.433 [2024-07-15 11:40:23.954911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.433 [2024-07-15 11:40:24.007852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:56.005 11:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:56.005 11:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:56.005 11:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:56.005 11:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:56.005 11:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:56.267 11:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:56.267 11:40:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:56.528 nvme0n1 00:28:56.788 11:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:56.788 11:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:56.788 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:56.788 Zero copy mechanism will not be used. 00:28:56.788 Running I/O for 2 seconds... 
00:28:58.761 00:28:58.761 Latency(us) 00:28:58.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:58.761 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:58.761 nvme0n1 : 2.01 3609.33 451.17 0.00 0.00 4425.33 2020.69 17367.04 00:28:58.761 =================================================================================================================== 00:28:58.761 Total : 3609.33 451.17 0.00 0.00 4425.33 2020.69 17367.04 00:28:58.761 0 00:28:58.761 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:58.761 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:58.761 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:58.761 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:58.761 | select(.opcode=="crc32c") 00:28:58.761 | "\(.module_name) \(.executed)"' 00:28:58.761 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:59.022 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:59.022 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:59.022 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:59.022 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:59.022 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3722736 00:28:59.022 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3722736 ']' 00:28:59.022 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3722736 00:28:59.022 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:59.022 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:59.022 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3722736 00:28:59.022 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:59.022 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:59.022 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3722736' 00:28:59.022 killing process with pid 3722736 00:28:59.022 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3722736 00:28:59.022 Received shutdown signal, test time was about 2.000000 seconds 00:28:59.022 00:28:59.022 Latency(us) 00:28:59.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.022 =================================================================================================================== 00:28:59.022 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:59.022 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3722736 00:28:59.022 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3720333 00:28:59.022 11:40:27 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3720333 ']' 00:28:59.022 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3720333 00:28:59.022 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:59.022 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:59.022 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3720333 00:28:59.283 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:59.283 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:59.283 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3720333' 00:28:59.283 killing process with pid 3720333 00:28:59.283 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3720333 00:28:59.283 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3720333 00:28:59.283 00:28:59.283 real 0m16.422s 00:28:59.283 user 0m32.138s 00:28:59.283 sys 0m3.349s 00:28:59.283 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:59.283 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:59.283 ************************************ 00:28:59.283 END TEST nvmf_digest_clean 00:28:59.283 ************************************ 00:28:59.283 11:40:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:28:59.283 11:40:27 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:59.283 11:40:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:59.283 11:40:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:59.283 11:40:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:59.283 ************************************ 00:28:59.283 START TEST nvmf_digest_error 00:28:59.283 ************************************ 00:28:59.283 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:28:59.283 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:59.283 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:59.283 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:59.283 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:59.283 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3723511 00:28:59.283 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3723511 00:28:59.283 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:59.283 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3723511 ']' 00:28:59.283 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:28:59.283 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:59.283 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.283 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:59.283 11:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:59.544 [2024-07-15 11:40:28.015282] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:28:59.544 [2024-07-15 11:40:28.015328] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.544 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.544 [2024-07-15 11:40:28.081354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.544 [2024-07-15 11:40:28.146347] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.544 [2024-07-15 11:40:28.146389] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.544 [2024-07-15 11:40:28.146397] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.544 [2024-07-15 11:40:28.146403] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:59.544 [2024-07-15 11:40:28.146409] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
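For nvmf_digest_error the target is deliberately restarted with --wait-for-rpc: before the framework is started, crc32c is reassigned to the "error" accel module (see the accel_rpc notice just below, "Operation crc32c will be assigned to module error"), after which the usual target config follows (null0 bdev, TCP transport, listener on 10.0.0.2:4420). The assignment itself is a single RPC; rpc_cmd in the trace appears to be the suite's rpc.py wrapper talking to the default /var/tmp/spdk.sock:

    # target side: route crc32c work through the error-injection accel module
    scripts/rpc.py accel_assign_opc -o crc32c -m error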
00:28:59.544 [2024-07-15 11:40:28.146433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.113 11:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:00.113 11:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:00.113 11:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:00.113 11:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:00.113 11:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:00.374 11:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:00.374 11:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:00.374 11:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.374 11:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:00.374 [2024-07-15 11:40:28.832385] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:00.374 11:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.374 11:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:00.374 11:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:00.374 11:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.374 11:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:00.374 null0 00:29:00.374 [2024-07-15 11:40:28.913272] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.374 [2024-07-15 11:40:28.937455] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:00.374 11:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.374 11:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:00.374 11:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:00.374 11:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:00.374 11:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:00.374 11:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:00.374 11:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3723715 00:29:00.374 11:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3723715 /var/tmp/bperf.sock 00:29:00.374 11:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3723715 ']' 00:29:00.374 11:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:00.374 11:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:00.374 11:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:29:00.374 11:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:00.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:00.374 11:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:00.374 11:40:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:00.374 [2024-07-15 11:40:28.991597] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:29:00.374 [2024-07-15 11:40:28.991647] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3723715 ] 00:29:00.374 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.374 [2024-07-15 11:40:29.064921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.635 [2024-07-15 11:40:29.118332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.206 11:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:01.206 11:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:01.206 11:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:01.206 11:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:01.466 11:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:01.466 11:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.466 11:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:01.466 11:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.466 11:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:01.466 11:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:01.466 nvme0n1 00:29:01.726 11:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:01.726 11:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.726 11:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:01.726 11:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.726 11:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:01.726 11:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:01.726 Running I/O for 2 seconds... 00:29:01.726 [2024-07-15 11:40:30.294541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.726 [2024-07-15 11:40:30.294571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.726 [2024-07-15 11:40:30.294580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.726 [2024-07-15 11:40:30.307580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.727 [2024-07-15 11:40:30.307600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.727 [2024-07-15 11:40:30.307607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.727 [2024-07-15 11:40:30.321217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.727 [2024-07-15 11:40:30.321236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.727 [2024-07-15 11:40:30.321246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.727 [2024-07-15 11:40:30.334272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.727 [2024-07-15 11:40:30.334291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.727 [2024-07-15 11:40:30.334298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.727 [2024-07-15 11:40:30.346103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.727 [2024-07-15 11:40:30.346124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.727 [2024-07-15 11:40:30.346131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.727 [2024-07-15 11:40:30.358254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.727 [2024-07-15 11:40:30.358272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.727 [2024-07-15 11:40:30.358279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.727 [2024-07-15 11:40:30.370825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.727 [2024-07-15 11:40:30.370843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13116 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:01.727 [2024-07-15 11:40:30.370850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.727 [2024-07-15 11:40:30.382871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.727 [2024-07-15 11:40:30.382890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.727 [2024-07-15 11:40:30.382896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.727 [2024-07-15 11:40:30.395077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.727 [2024-07-15 11:40:30.395095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.727 [2024-07-15 11:40:30.395101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.727 [2024-07-15 11:40:30.407874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.727 [2024-07-15 11:40:30.407892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.727 [2024-07-15 11:40:30.407899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.727 [2024-07-15 11:40:30.419446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.727 [2024-07-15 11:40:30.419463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.727 [2024-07-15 11:40:30.419469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.987 [2024-07-15 11:40:30.431729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.987 [2024-07-15 11:40:30.431751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.987 [2024-07-15 11:40:30.431758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.987 [2024-07-15 11:40:30.444348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.987 [2024-07-15 11:40:30.444366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.987 [2024-07-15 11:40:30.444372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.987 [2024-07-15 11:40:30.455681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.987 [2024-07-15 11:40:30.455699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:113 nsid:1 lba:13557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.987 [2024-07-15 11:40:30.455706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.987 [2024-07-15 11:40:30.467673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.987 [2024-07-15 11:40:30.467692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.987 [2024-07-15 11:40:30.467698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.987 [2024-07-15 11:40:30.481882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.987 [2024-07-15 11:40:30.481899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.987 [2024-07-15 11:40:30.481905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.987 [2024-07-15 11:40:30.493999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.987 [2024-07-15 11:40:30.494016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.987 [2024-07-15 11:40:30.494022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.987 [2024-07-15 11:40:30.504810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.987 [2024-07-15 11:40:30.504827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.987 [2024-07-15 11:40:30.504833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.987 [2024-07-15 11:40:30.517961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.987 [2024-07-15 11:40:30.517978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.987 [2024-07-15 11:40:30.517984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.987 [2024-07-15 11:40:30.530651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.987 [2024-07-15 11:40:30.530668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.987 [2024-07-15 11:40:30.530675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.987 [2024-07-15 11:40:30.543106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.987 [2024-07-15 11:40:30.543127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.987 [2024-07-15 11:40:30.543134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.987 [2024-07-15 11:40:30.555405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.987 [2024-07-15 11:40:30.555422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.987 [2024-07-15 11:40:30.555429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.987 [2024-07-15 11:40:30.567299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.987 [2024-07-15 11:40:30.567317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.987 [2024-07-15 11:40:30.567324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.987 [2024-07-15 11:40:30.578894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.987 [2024-07-15 11:40:30.578912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.987 [2024-07-15 11:40:30.578918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.987 [2024-07-15 11:40:30.591752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.987 [2024-07-15 11:40:30.591769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.987 [2024-07-15 11:40:30.591775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.987 [2024-07-15 11:40:30.602823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.987 [2024-07-15 11:40:30.602841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.987 [2024-07-15 11:40:30.602847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.987 [2024-07-15 11:40:30.616274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.987 [2024-07-15 11:40:30.616291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.987 [2024-07-15 11:40:30.616297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.987 [2024-07-15 11:40:30.628613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.987 
[2024-07-15 11:40:30.628630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.987 [2024-07-15 11:40:30.628636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.987 [2024-07-15 11:40:30.641566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.987 [2024-07-15 11:40:30.641584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.987 [2024-07-15 11:40:30.641593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.987 [2024-07-15 11:40:30.652990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.987 [2024-07-15 11:40:30.653007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.988 [2024-07-15 11:40:30.653013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.988 [2024-07-15 11:40:30.665735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.988 [2024-07-15 11:40:30.665752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.988 [2024-07-15 11:40:30.665758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.988 [2024-07-15 11:40:30.676950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:01.988 [2024-07-15 11:40:30.676967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.988 [2024-07-15 11:40:30.676973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.248 [2024-07-15 11:40:30.689402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.248 [2024-07-15 11:40:30.689419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.248 [2024-07-15 11:40:30.689425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.248 [2024-07-15 11:40:30.701889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.248 [2024-07-15 11:40:30.701906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.248 [2024-07-15 11:40:30.701913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.248 [2024-07-15 11:40:30.714325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x8028e0) 00:29:02.248 [2024-07-15 11:40:30.714342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.248 [2024-07-15 11:40:30.714349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.248 [2024-07-15 11:40:30.728212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.248 [2024-07-15 11:40:30.728230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.248 [2024-07-15 11:40:30.728237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.248 [2024-07-15 11:40:30.739285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.248 [2024-07-15 11:40:30.739303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.248 [2024-07-15 11:40:30.739309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.248 [2024-07-15 11:40:30.752666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.248 [2024-07-15 11:40:30.752684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.248 [2024-07-15 11:40:30.752690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.248 [2024-07-15 11:40:30.764870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.248 [2024-07-15 11:40:30.764888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.248 [2024-07-15 11:40:30.764894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.248 [2024-07-15 11:40:30.776238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.248 [2024-07-15 11:40:30.776255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.248 [2024-07-15 11:40:30.776261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.248 [2024-07-15 11:40:30.788103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.248 [2024-07-15 11:40:30.788120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.248 [2024-07-15 11:40:30.788131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.248 [2024-07-15 11:40:30.801538] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.248 [2024-07-15 11:40:30.801556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.248 [2024-07-15 11:40:30.801562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.248 [2024-07-15 11:40:30.814414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.248 [2024-07-15 11:40:30.814432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.248 [2024-07-15 11:40:30.814439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.248 [2024-07-15 11:40:30.825764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.248 [2024-07-15 11:40:30.825781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.248 [2024-07-15 11:40:30.825788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.248 [2024-07-15 11:40:30.838409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.248 [2024-07-15 11:40:30.838427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.248 [2024-07-15 11:40:30.838434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.248 [2024-07-15 11:40:30.850465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.248 [2024-07-15 11:40:30.850483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.248 [2024-07-15 11:40:30.850493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.249 [2024-07-15 11:40:30.862365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.249 [2024-07-15 11:40:30.862382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.249 [2024-07-15 11:40:30.862389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.249 [2024-07-15 11:40:30.875983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.249 [2024-07-15 11:40:30.876000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.249 [2024-07-15 11:40:30.876006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:02.249 [2024-07-15 11:40:30.888214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.249 [2024-07-15 11:40:30.888231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.249 [2024-07-15 11:40:30.888237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.249 [2024-07-15 11:40:30.899943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.249 [2024-07-15 11:40:30.899959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:18506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.249 [2024-07-15 11:40:30.899966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.249 [2024-07-15 11:40:30.912329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.249 [2024-07-15 11:40:30.912346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.249 [2024-07-15 11:40:30.912352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.249 [2024-07-15 11:40:30.924564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.249 [2024-07-15 11:40:30.924581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.249 [2024-07-15 11:40:30.924588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.249 [2024-07-15 11:40:30.936699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.249 [2024-07-15 11:40:30.936716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.249 [2024-07-15 11:40:30.936722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.509 [2024-07-15 11:40:30.949695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.509 [2024-07-15 11:40:30.949713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.509 [2024-07-15 11:40:30.949720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.509 [2024-07-15 11:40:30.961190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.509 [2024-07-15 11:40:30.961211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.509 [2024-07-15 11:40:30.961217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.509 [2024-07-15 11:40:30.973254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.510 [2024-07-15 11:40:30.973271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.510 [2024-07-15 11:40:30.973278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.510 [2024-07-15 11:40:30.986271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.510 [2024-07-15 11:40:30.986288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.510 [2024-07-15 11:40:30.986295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.510 [2024-07-15 11:40:30.998982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.510 [2024-07-15 11:40:30.999000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.510 [2024-07-15 11:40:30.999006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.510 [2024-07-15 11:40:31.010761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.510 [2024-07-15 11:40:31.010778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.510 [2024-07-15 11:40:31.010785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.510 [2024-07-15 11:40:31.024108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.510 [2024-07-15 11:40:31.024128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.510 [2024-07-15 11:40:31.024135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.510 [2024-07-15 11:40:31.034882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.510 [2024-07-15 11:40:31.034899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.510 [2024-07-15 11:40:31.034905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.510 [2024-07-15 11:40:31.048722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.510 [2024-07-15 11:40:31.048740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.510 [2024-07-15 11:40:31.048746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.510 [2024-07-15 11:40:31.059673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.510 [2024-07-15 11:40:31.059690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.510 [2024-07-15 11:40:31.059696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.510 [2024-07-15 11:40:31.072345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.510 [2024-07-15 11:40:31.072364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.510 [2024-07-15 11:40:31.072370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.510 [2024-07-15 11:40:31.085573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.510 [2024-07-15 11:40:31.085589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.510 [2024-07-15 11:40:31.085596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.510 [2024-07-15 11:40:31.097439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.510 [2024-07-15 11:40:31.097455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.510 [2024-07-15 11:40:31.097461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.510 [2024-07-15 11:40:31.109967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.510 [2024-07-15 11:40:31.109983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.510 [2024-07-15 11:40:31.109989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.510 [2024-07-15 11:40:31.122288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.510 [2024-07-15 11:40:31.122304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.510 [2024-07-15 11:40:31.122311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.510 [2024-07-15 11:40:31.134890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.510 [2024-07-15 11:40:31.134906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.510 [2024-07-15 11:40:31.134912] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.510 [2024-07-15 11:40:31.145490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.510 [2024-07-15 11:40:31.145506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.510 [2024-07-15 11:40:31.145512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.510 [2024-07-15 11:40:31.157675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.510 [2024-07-15 11:40:31.157692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.510 [2024-07-15 11:40:31.157698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.510 [2024-07-15 11:40:31.171192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.510 [2024-07-15 11:40:31.171208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.510 [2024-07-15 11:40:31.171217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.510 [2024-07-15 11:40:31.183378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.510 [2024-07-15 11:40:31.183395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.510 [2024-07-15 11:40:31.183401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.510 [2024-07-15 11:40:31.195125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.510 [2024-07-15 11:40:31.195142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.510 [2024-07-15 11:40:31.195148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.510 [2024-07-15 11:40:31.207243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.510 [2024-07-15 11:40:31.207260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.510 [2024-07-15 11:40:31.207266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.780 [2024-07-15 11:40:31.219135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.780 [2024-07-15 11:40:31.219152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:02.780 [2024-07-15 11:40:31.219159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.780 [2024-07-15 11:40:31.231379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.780 [2024-07-15 11:40:31.231396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.780 [2024-07-15 11:40:31.231402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.780 [2024-07-15 11:40:31.244229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.780 [2024-07-15 11:40:31.244245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.780 [2024-07-15 11:40:31.244252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.780 [2024-07-15 11:40:31.254831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.780 [2024-07-15 11:40:31.254848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.780 [2024-07-15 11:40:31.254854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.780 [2024-07-15 11:40:31.269076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.780 [2024-07-15 11:40:31.269092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.780 [2024-07-15 11:40:31.269098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.780 [2024-07-15 11:40:31.281390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.780 [2024-07-15 11:40:31.281409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.780 [2024-07-15 11:40:31.281416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.780 [2024-07-15 11:40:31.293195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.780 [2024-07-15 11:40:31.293212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.780 [2024-07-15 11:40:31.293219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.780 [2024-07-15 11:40:31.305231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.780 [2024-07-15 11:40:31.305248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 
lba:4982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.780 [2024-07-15 11:40:31.305254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.780 [2024-07-15 11:40:31.317249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.780 [2024-07-15 11:40:31.317266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.780 [2024-07-15 11:40:31.317272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.780 [2024-07-15 11:40:31.329874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.780 [2024-07-15 11:40:31.329890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.780 [2024-07-15 11:40:31.329897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.780 [2024-07-15 11:40:31.341842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.780 [2024-07-15 11:40:31.341859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.780 [2024-07-15 11:40:31.341865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.780 [2024-07-15 11:40:31.354173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.780 [2024-07-15 11:40:31.354190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.780 [2024-07-15 11:40:31.354197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.780 [2024-07-15 11:40:31.366426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.780 [2024-07-15 11:40:31.366443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.780 [2024-07-15 11:40:31.366449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.780 [2024-07-15 11:40:31.378968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.780 [2024-07-15 11:40:31.378985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.780 [2024-07-15 11:40:31.378991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.780 [2024-07-15 11:40:31.390777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.780 [2024-07-15 11:40:31.390794] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.780 [2024-07-15 11:40:31.390800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.780 [2024-07-15 11:40:31.402692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.780 [2024-07-15 11:40:31.402709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.780 [2024-07-15 11:40:31.402715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.780 [2024-07-15 11:40:31.414196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.780 [2024-07-15 11:40:31.414213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.780 [2024-07-15 11:40:31.414219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.780 [2024-07-15 11:40:31.426694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.780 [2024-07-15 11:40:31.426711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.780 [2024-07-15 11:40:31.426717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.780 [2024-07-15 11:40:31.439293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.780 [2024-07-15 11:40:31.439310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.780 [2024-07-15 11:40:31.439316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.780 [2024-07-15 11:40:31.451247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.780 [2024-07-15 11:40:31.451264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.780 [2024-07-15 11:40:31.451271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.780 [2024-07-15 11:40:31.464276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.780 [2024-07-15 11:40:31.464293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.781 [2024-07-15 11:40:31.464299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.781 [2024-07-15 11:40:31.476612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:02.781 
[2024-07-15 11:40:31.476629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.781 [2024-07-15 11:40:31.476636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.041 [2024-07-15 11:40:31.489673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.041 [2024-07-15 11:40:31.489690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.041 [2024-07-15 11:40:31.489703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.041 [2024-07-15 11:40:31.500547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.041 [2024-07-15 11:40:31.500563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.041 [2024-07-15 11:40:31.500570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.041 [2024-07-15 11:40:31.512917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.041 [2024-07-15 11:40:31.512934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.041 [2024-07-15 11:40:31.512940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.041 [2024-07-15 11:40:31.525693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.041 [2024-07-15 11:40:31.525709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.041 [2024-07-15 11:40:31.525716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.041 [2024-07-15 11:40:31.538231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.041 [2024-07-15 11:40:31.538247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.041 [2024-07-15 11:40:31.538254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.041 [2024-07-15 11:40:31.550748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.041 [2024-07-15 11:40:31.550765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.041 [2024-07-15 11:40:31.550771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.041 [2024-07-15 11:40:31.562309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x8028e0) 00:29:03.042 [2024-07-15 11:40:31.562326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.042 [2024-07-15 11:40:31.562332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.042 [2024-07-15 11:40:31.574963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.042 [2024-07-15 11:40:31.574980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.042 [2024-07-15 11:40:31.574986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.042 [2024-07-15 11:40:31.586905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.042 [2024-07-15 11:40:31.586921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.042 [2024-07-15 11:40:31.586928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.042 [2024-07-15 11:40:31.599212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.042 [2024-07-15 11:40:31.599229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.042 [2024-07-15 11:40:31.599235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.042 [2024-07-15 11:40:31.611776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.042 [2024-07-15 11:40:31.611792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.042 [2024-07-15 11:40:31.611799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.042 [2024-07-15 11:40:31.624605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.042 [2024-07-15 11:40:31.624622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.042 [2024-07-15 11:40:31.624628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.042 [2024-07-15 11:40:31.637756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.042 [2024-07-15 11:40:31.637773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.042 [2024-07-15 11:40:31.637779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.042 [2024-07-15 11:40:31.649483] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.042 [2024-07-15 11:40:31.649500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.042 [2024-07-15 11:40:31.649507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.042 [2024-07-15 11:40:31.662020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.042 [2024-07-15 11:40:31.662037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.042 [2024-07-15 11:40:31.662043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.042 [2024-07-15 11:40:31.672928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.042 [2024-07-15 11:40:31.672944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.042 [2024-07-15 11:40:31.672950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.042 [2024-07-15 11:40:31.685796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.042 [2024-07-15 11:40:31.685812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.042 [2024-07-15 11:40:31.685818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.042 [2024-07-15 11:40:31.698263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.042 [2024-07-15 11:40:31.698280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.042 [2024-07-15 11:40:31.698289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.042 [2024-07-15 11:40:31.711168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.042 [2024-07-15 11:40:31.711185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.042 [2024-07-15 11:40:31.711191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.042 [2024-07-15 11:40:31.723585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.042 [2024-07-15 11:40:31.723601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.042 [2024-07-15 11:40:31.723608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:29:03.042 [2024-07-15 11:40:31.735985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.042 [2024-07-15 11:40:31.736002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.042 [2024-07-15 11:40:31.736008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.302 [2024-07-15 11:40:31.748294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.302 [2024-07-15 11:40:31.748311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.302 [2024-07-15 11:40:31.748317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.302 [2024-07-15 11:40:31.760205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.302 [2024-07-15 11:40:31.760222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.302 [2024-07-15 11:40:31.760228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.302 [2024-07-15 11:40:31.772082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.302 [2024-07-15 11:40:31.772099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.302 [2024-07-15 11:40:31.772105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.302 [2024-07-15 11:40:31.784829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.302 [2024-07-15 11:40:31.784846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.302 [2024-07-15 11:40:31.784853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.302 [2024-07-15 11:40:31.796711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.302 [2024-07-15 11:40:31.796730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.302 [2024-07-15 11:40:31.796736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.302 [2024-07-15 11:40:31.810114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.302 [2024-07-15 11:40:31.810137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.302 [2024-07-15 11:40:31.810143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.302 [2024-07-15 11:40:31.822236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.302 [2024-07-15 11:40:31.822253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.302 [2024-07-15 11:40:31.822259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.302 [2024-07-15 11:40:31.834080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.302 [2024-07-15 11:40:31.834097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.302 [2024-07-15 11:40:31.834103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.302 [2024-07-15 11:40:31.845291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.302 [2024-07-15 11:40:31.845308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.302 [2024-07-15 11:40:31.845314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.302 [2024-07-15 11:40:31.858642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.302 [2024-07-15 11:40:31.858659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.302 [2024-07-15 11:40:31.858665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.302 [2024-07-15 11:40:31.870437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.302 [2024-07-15 11:40:31.870455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.302 [2024-07-15 11:40:31.870461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.302 [2024-07-15 11:40:31.882156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.302 [2024-07-15 11:40:31.882173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.302 [2024-07-15 11:40:31.882180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.302 [2024-07-15 11:40:31.895397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.302 [2024-07-15 11:40:31.895413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.302 [2024-07-15 11:40:31.895420] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.302 [2024-07-15 11:40:31.907583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.302 [2024-07-15 11:40:31.907600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.302 [2024-07-15 11:40:31.907606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.302 [2024-07-15 11:40:31.920235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.302 [2024-07-15 11:40:31.920252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.302 [2024-07-15 11:40:31.920258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.302 [2024-07-15 11:40:31.932208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.302 [2024-07-15 11:40:31.932226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.302 [2024-07-15 11:40:31.932232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.302 [2024-07-15 11:40:31.945301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.302 [2024-07-15 11:40:31.945318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.302 [2024-07-15 11:40:31.945324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.302 [2024-07-15 11:40:31.957016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.302 [2024-07-15 11:40:31.957033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.302 [2024-07-15 11:40:31.957039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.302 [2024-07-15 11:40:31.969603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.302 [2024-07-15 11:40:31.969620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.302 [2024-07-15 11:40:31.969626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.302 [2024-07-15 11:40:31.981238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.302 [2024-07-15 11:40:31.981254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.302 [2024-07-15 11:40:31.981261] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.302 [2024-07-15 11:40:31.994269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.302 [2024-07-15 11:40:31.994285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.302 [2024-07-15 11:40:31.994292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.561 [2024-07-15 11:40:32.004958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.561 [2024-07-15 11:40:32.004975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.561 [2024-07-15 11:40:32.004981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.561 [2024-07-15 11:40:32.018127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.562 [2024-07-15 11:40:32.018144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.562 [2024-07-15 11:40:32.018154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.562 [2024-07-15 11:40:32.030651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.562 [2024-07-15 11:40:32.030669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.562 [2024-07-15 11:40:32.030676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.562 [2024-07-15 11:40:32.041457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.562 [2024-07-15 11:40:32.041474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.562 [2024-07-15 11:40:32.041481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.562 [2024-07-15 11:40:32.054581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.562 [2024-07-15 11:40:32.054599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.562 [2024-07-15 11:40:32.054606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.562 [2024-07-15 11:40:32.066682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.562 [2024-07-15 11:40:32.066699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:03.562 [2024-07-15 11:40:32.066706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.562 [2024-07-15 11:40:32.078498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.562 [2024-07-15 11:40:32.078515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.562 [2024-07-15 11:40:32.078521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.562 [2024-07-15 11:40:32.091024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.562 [2024-07-15 11:40:32.091041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.562 [2024-07-15 11:40:32.091047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.562 [2024-07-15 11:40:32.103082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.562 [2024-07-15 11:40:32.103099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.562 [2024-07-15 11:40:32.103106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.562 [2024-07-15 11:40:32.115426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.562 [2024-07-15 11:40:32.115444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.562 [2024-07-15 11:40:32.115450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.562 [2024-07-15 11:40:32.128204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.562 [2024-07-15 11:40:32.128224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.562 [2024-07-15 11:40:32.128230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.562 [2024-07-15 11:40:32.138904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.562 [2024-07-15 11:40:32.138921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.562 [2024-07-15 11:40:32.138927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.562 [2024-07-15 11:40:32.151756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.562 [2024-07-15 11:40:32.151773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14697 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.562 [2024-07-15 11:40:32.151780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.562 [2024-07-15 11:40:32.163561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.562 [2024-07-15 11:40:32.163578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.562 [2024-07-15 11:40:32.163584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.562 [2024-07-15 11:40:32.176315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.562 [2024-07-15 11:40:32.176332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.562 [2024-07-15 11:40:32.176338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.562 [2024-07-15 11:40:32.188886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.562 [2024-07-15 11:40:32.188904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.562 [2024-07-15 11:40:32.188911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.562 [2024-07-15 11:40:32.202243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.562 [2024-07-15 11:40:32.202261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.562 [2024-07-15 11:40:32.202267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.562 [2024-07-15 11:40:32.213225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.562 [2024-07-15 11:40:32.213242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.562 [2024-07-15 11:40:32.213248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.562 [2024-07-15 11:40:32.226310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.562 [2024-07-15 11:40:32.226327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.562 [2024-07-15 11:40:32.226333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.562 [2024-07-15 11:40:32.239055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0) 00:29:03.562 [2024-07-15 11:40:32.239072] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.562 [2024-07-15 11:40:32.239079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:03.562 [2024-07-15 11:40:32.251193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0)
00:29:03.562 [2024-07-15 11:40:32.251210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.562 [2024-07-15 11:40:32.251216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:03.563 [2024-07-15 11:40:32.262336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0)
00:29:03.563 [2024-07-15 11:40:32.262353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.563 [2024-07-15 11:40:32.262360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:03.822 [2024-07-15 11:40:32.275155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8028e0)
00:29:03.822 [2024-07-15 11:40:32.275172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.822 [2024-07-15 11:40:32.275179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:03.822
00:29:03.822 Latency(us)
00:29:03.822 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:03.822 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:03.822 nvme0n1 : 2.00 20746.82 81.04 0.00 0.00 6163.78 3085.65 14964.05
00:29:03.822 ===================================================================================================================
00:29:03.822 Total : 20746.82 81.04 0.00 0.00 6163.78 3085.65 14964.05
00:29:03.822 0
00:29:03.822 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:03.822 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:03.822 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:03.822 | .driver_specific
00:29:03.822 | .nvme_error
00:29:03.822 | .status_code
00:29:03.822 | .command_transient_transport_error'
00:29:03.822 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:03.822 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 ))
00:29:03.822 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3723715
00:29:03.822 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3723715 ']'
00:29:03.822 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3723715
00:29:03.822 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:29:03.822 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:03.822 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3723715
00:29:03.822 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:29:03.822 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:29:03.822 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3723715'
00:29:03.822 killing process with pid 3723715
00:29:03.822 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3723715
00:29:03.822 Received shutdown signal, test time was about 2.000000 seconds
00:29:03.822
00:29:03.822 Latency(us)
00:29:03.822 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:03.822 ===================================================================================================================
00:29:03.822 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:03.822 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3723715
00:29:03.822 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:04.082 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:04.082 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:04.082 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:04.082 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:04.082 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3724477
00:29:04.082 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3724477 /var/tmp/bperf.sock
00:29:04.082 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3724477 ']'
00:29:04.082 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:04.082 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:04.082 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:04.082 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:04.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:04.082 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:04.082 11:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:04.082 [2024-07-15 11:40:32.683629] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization...
00:29:04.082 [2024-07-15 11:40:32.683684] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724477 ]
00:29:04.082 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:04.082 Zero copy mechanism will not be used.
00:29:04.082 EAL: No free 2048 kB hugepages reported on node 1
00:29:04.082 [2024-07-15 11:40:32.756669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:04.342 [2024-07-15 11:40:32.809521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:04.912 11:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:04.912 11:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:29:04.912 11:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:04.912 11:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:04.912 11:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:04.912 11:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:04.912 11:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:04.912 11:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:04.912 11:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:04.912 11:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:05.485 nvme0n1
00:29:05.485 11:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:05.485 11:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:05.485 11:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:05.485 11:40:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:05.485 11:40:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:05.485 11:40:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:05.485 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:05.485 Zero copy mechanism will not be used.
00:29:05.485 Running I/O for 2 seconds...
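[Editor's note] For readers skimming the trace, the subtest that starts here can be condensed into a short shell sketch. This is an illustrative re-statement of the commands logged above, not the test script itself: $SPDK stands in for the checkout path, the socket-wait loop replaces the suite's waitforlisten helper, and rpc_cmd is assumed to be the helper that reaches the nvmf target's default RPC socket (the accel-layer error injection happens on the target side).

  # Launch bdevperf on its own RPC socket; -z makes it wait for RPCs before running the 2 s randread job.
  "$SPDK"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.1; done   # stand-in for waitforlisten

  # Count NVMe error statuses per bdev and retry failed I/O instead of failing the job outright.
  "$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Clear any earlier crc32c injection on the target, attach with data digest (--ddgst) enabled,
  # then corrupt every 32nd crc32c the target's accel layer computes so the returned digests are wrong.
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  "$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

  # Run the configured workload; each digest mismatch is logged as COMMAND TRANSIENT TRANSPORT ERROR.
  "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

  # Read the per-bdev transient-transport-error counter back and require it to be non-zero,
  # mirroring the "(( 162 > 0 ))" check that closed the previous run above, then tear down bdevperf.
  errs=$("$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errs > 0 ))
  kill "$bperfpid"
  wait "$bperfpid" || true

With the retry count set to -1 the corrupted reads are retried rather than failed back to the job, which matches the summary of the previous run: roughly 20.7k IOPS and 0.00 Fail/s even though 162 transient transport errors were counted.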
00:29:05.485 [2024-07-15 11:40:34.099077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:05.485 [2024-07-15 11:40:34.099109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.485 [2024-07-15 11:40:34.099119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.485 [2024-07-15 11:40:34.113221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:05.485 [2024-07-15 11:40:34.113244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.485 [2024-07-15 11:40:34.113251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.485 [2024-07-15 11:40:34.126033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:05.485 [2024-07-15 11:40:34.126053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.485 [2024-07-15 11:40:34.126060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:05.485 [2024-07-15 11:40:34.138754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:05.485 [2024-07-15 11:40:34.138773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.485 [2024-07-15 11:40:34.138780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.485 [2024-07-15 11:40:34.150722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:05.485 [2024-07-15 11:40:34.150741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.485 [2024-07-15 11:40:34.150748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.485 [2024-07-15 11:40:34.165445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:05.485 [2024-07-15 11:40:34.165468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.485 [2024-07-15 11:40:34.165474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.485 [2024-07-15 11:40:34.179159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:05.485 [2024-07-15 11:40:34.179177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.485 [2024-07-15 11:40:34.179184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:05.747 [2024-07-15 11:40:34.193192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:05.747 [2024-07-15 11:40:34.193210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.748 [2024-07-15 11:40:34.193217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.748 [2024-07-15 11:40:34.206347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:05.748 [2024-07-15 11:40:34.206365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.748 [2024-07-15 11:40:34.206372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.748 [2024-07-15 11:40:34.219234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:05.748 [2024-07-15 11:40:34.219253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.748 [2024-07-15 11:40:34.219259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.748 [2024-07-15 11:40:34.234116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:05.748 [2024-07-15 11:40:34.234138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.748 [2024-07-15 11:40:34.234145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:05.748 [2024-07-15 11:40:34.246751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:05.748 [2024-07-15 11:40:34.246769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.748 [2024-07-15 11:40:34.246776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.748 [2024-07-15 11:40:34.261396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:05.748 [2024-07-15 11:40:34.261414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.748 [2024-07-15 11:40:34.261420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.748 [2024-07-15 11:40:34.273204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:05.748 [2024-07-15 11:40:34.273223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.748 [2024-07-15 11:40:34.273229] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.748 [2024-07-15 11:40:34.286955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:05.748 [2024-07-15 11:40:34.286973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.748 [2024-07-15 11:40:34.286979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:05.748 [2024-07-15 11:40:34.302542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:05.748 [2024-07-15 11:40:34.302561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.748 [2024-07-15 11:40:34.302567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.748 [2024-07-15 11:40:34.312992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:05.748 [2024-07-15 11:40:34.313011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.748 [2024-07-15 11:40:34.313017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.748 [2024-07-15 11:40:34.327884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:05.748 [2024-07-15 11:40:34.327902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.748 [2024-07-15 11:40:34.327909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.748 [2024-07-15 11:40:34.343030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:05.748 [2024-07-15 11:40:34.343047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.748 [2024-07-15 11:40:34.343053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:05.748 [2024-07-15 11:40:34.352942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:05.748 [2024-07-15 11:40:34.352961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.748 [2024-07-15 11:40:34.352967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.748 [2024-07-15 11:40:34.367750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:05.748 [2024-07-15 11:40:34.367768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:05.748 [2024-07-15 11:40:34.367774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.748 [2024-07-15 11:40:34.381854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:05.748 [2024-07-15 11:40:34.381872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.748 [2024-07-15 11:40:34.381879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.748 [2024-07-15 11:40:34.397294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:05.748 [2024-07-15 11:40:34.397312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.748 [2024-07-15 11:40:34.397323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:05.748 [2024-07-15 11:40:34.411746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:05.748 [2024-07-15 11:40:34.411763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.748 [2024-07-15 11:40:34.411770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:05.748 [2024-07-15 11:40:34.424661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:05.748 [2024-07-15 11:40:34.424679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.748 [2024-07-15 11:40:34.424686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:05.748 [2024-07-15 11:40:34.435094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:05.748 [2024-07-15 11:40:34.435111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.748 [2024-07-15 11:40:34.435118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.748 [2024-07-15 11:40:34.447597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:05.748 [2024-07-15 11:40:34.447615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.748 [2024-07-15 11:40:34.447622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 11:40:34.460679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.011 [2024-07-15 11:40:34.460698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 11:40:34.460704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 11:40:34.474676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.011 [2024-07-15 11:40:34.474694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 11:40:34.474701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 11:40:34.486949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.011 [2024-07-15 11:40:34.486966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 11:40:34.486973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 11:40:34.501423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.011 [2024-07-15 11:40:34.501441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 11:40:34.501448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 11:40:34.516248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.011 [2024-07-15 11:40:34.516266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 11:40:34.516272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 11:40:34.531415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.011 [2024-07-15 11:40:34.531433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 11:40:34.531439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 11:40:34.545862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.011 [2024-07-15 11:40:34.545880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 11:40:34.545886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 11:40:34.562096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.011 [2024-07-15 11:40:34.562113] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 11:40:34.562119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 11:40:34.576642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.011 [2024-07-15 11:40:34.576659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 11:40:34.576665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 11:40:34.592142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.011 [2024-07-15 11:40:34.592159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 11:40:34.592166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 11:40:34.606904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.011 [2024-07-15 11:40:34.606921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 11:40:34.606928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 11:40:34.620551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.011 [2024-07-15 11:40:34.620567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 11:40:34.620574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 11:40:34.634236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.011 [2024-07-15 11:40:34.634253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 11:40:34.634262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 11:40:34.646877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.011 [2024-07-15 11:40:34.646894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 11:40:34.646900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 11:40:34.660249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 
00:29:06.011 [2024-07-15 11:40:34.660266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 11:40:34.660272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 11:40:34.674858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.011 [2024-07-15 11:40:34.674875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 11:40:34.674881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 11:40:34.686945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.011 [2024-07-15 11:40:34.686962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 11:40:34.686969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 11:40:34.701598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.012 [2024-07-15 11:40:34.701616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.012 [2024-07-15 11:40:34.701622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.273 [2024-07-15 11:40:34.714559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.273 [2024-07-15 11:40:34.714578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.273 [2024-07-15 11:40:34.714586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.273 [2024-07-15 11:40:34.729257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.273 [2024-07-15 11:40:34.729274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.273 [2024-07-15 11:40:34.729281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.273 [2024-07-15 11:40:34.742446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.273 [2024-07-15 11:40:34.742464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.273 [2024-07-15 11:40:34.742470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.273 [2024-07-15 11:40:34.757985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.273 [2024-07-15 11:40:34.758009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.273 [2024-07-15 11:40:34.758015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.273 [2024-07-15 11:40:34.770695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.273 [2024-07-15 11:40:34.770713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.273 [2024-07-15 11:40:34.770719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.273 [2024-07-15 11:40:34.785835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.273 [2024-07-15 11:40:34.785852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.273 [2024-07-15 11:40:34.785858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.273 [2024-07-15 11:40:34.798825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.273 [2024-07-15 11:40:34.798842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.273 [2024-07-15 11:40:34.798848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.273 [2024-07-15 11:40:34.812265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.273 [2024-07-15 11:40:34.812283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.273 [2024-07-15 11:40:34.812289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.273 [2024-07-15 11:40:34.824909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.273 [2024-07-15 11:40:34.824926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.273 [2024-07-15 11:40:34.824933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.273 [2024-07-15 11:40:34.837027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.273 [2024-07-15 11:40:34.837044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.273 [2024-07-15 11:40:34.837050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.273 [2024-07-15 11:40:34.849007] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.273 [2024-07-15 11:40:34.849025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.273 [2024-07-15 11:40:34.849031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.273 [2024-07-15 11:40:34.862419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.273 [2024-07-15 11:40:34.862436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.273 [2024-07-15 11:40:34.862442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.273 [2024-07-15 11:40:34.876046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.273 [2024-07-15 11:40:34.876064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.273 [2024-07-15 11:40:34.876071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.273 [2024-07-15 11:40:34.890444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.273 [2024-07-15 11:40:34.890461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.273 [2024-07-15 11:40:34.890467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.273 [2024-07-15 11:40:34.904801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.273 [2024-07-15 11:40:34.904818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.273 [2024-07-15 11:40:34.904825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.273 [2024-07-15 11:40:34.918308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.273 [2024-07-15 11:40:34.918326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.273 [2024-07-15 11:40:34.918332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.273 [2024-07-15 11:40:34.933178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.273 [2024-07-15 11:40:34.933196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.273 [2024-07-15 11:40:34.933203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:29:06.273 [2024-07-15 11:40:34.946535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.273 [2024-07-15 11:40:34.946553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.273 [2024-07-15 11:40:34.946560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.273 [2024-07-15 11:40:34.957936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.273 [2024-07-15 11:40:34.957953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.273 [2024-07-15 11:40:34.957959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.273 [2024-07-15 11:40:34.970497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.273 [2024-07-15 11:40:34.970514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.273 [2024-07-15 11:40:34.970521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.535 [2024-07-15 11:40:34.981383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.535 [2024-07-15 11:40:34.981401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.535 [2024-07-15 11:40:34.981411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.535 [2024-07-15 11:40:34.991030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.535 [2024-07-15 11:40:34.991048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.535 [2024-07-15 11:40:34.991054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.535 [2024-07-15 11:40:35.004244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.535 [2024-07-15 11:40:35.004262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.535 [2024-07-15 11:40:35.004269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.535 [2024-07-15 11:40:35.017483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.535 [2024-07-15 11:40:35.017500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.535 [2024-07-15 11:40:35.017507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.535 [2024-07-15 11:40:35.031499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.535 [2024-07-15 11:40:35.031517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.535 [2024-07-15 11:40:35.031523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.535 [2024-07-15 11:40:35.046084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.535 [2024-07-15 11:40:35.046102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.535 [2024-07-15 11:40:35.046109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.535 [2024-07-15 11:40:35.059183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.535 [2024-07-15 11:40:35.059200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.535 [2024-07-15 11:40:35.059207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.535 [2024-07-15 11:40:35.075831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.535 [2024-07-15 11:40:35.075850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.535 [2024-07-15 11:40:35.075856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.535 [2024-07-15 11:40:35.088915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.535 [2024-07-15 11:40:35.088933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.535 [2024-07-15 11:40:35.088939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.535 [2024-07-15 11:40:35.101178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.535 [2024-07-15 11:40:35.101199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.535 [2024-07-15 11:40:35.101205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.535 [2024-07-15 11:40:35.116219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.535 [2024-07-15 11:40:35.116237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.535 [2024-07-15 11:40:35.116244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.535 [2024-07-15 11:40:35.129139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.535 [2024-07-15 11:40:35.129156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.535 [2024-07-15 11:40:35.129162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.535 [2024-07-15 11:40:35.142907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.535 [2024-07-15 11:40:35.142925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.535 [2024-07-15 11:40:35.142932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.536 [2024-07-15 11:40:35.155007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.536 [2024-07-15 11:40:35.155026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.536 [2024-07-15 11:40:35.155032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.536 [2024-07-15 11:40:35.168673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.536 [2024-07-15 11:40:35.168691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.536 [2024-07-15 11:40:35.168698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.536 [2024-07-15 11:40:35.179831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.536 [2024-07-15 11:40:35.179850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.536 [2024-07-15 11:40:35.179856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.536 [2024-07-15 11:40:35.192530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.536 [2024-07-15 11:40:35.192548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.536 [2024-07-15 11:40:35.192554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.536 [2024-07-15 11:40:35.205480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.536 [2024-07-15 11:40:35.205497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.536 [2024-07-15 
11:40:35.205504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.536 [2024-07-15 11:40:35.217436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.536 [2024-07-15 11:40:35.217455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.536 [2024-07-15 11:40:35.217461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.536 [2024-07-15 11:40:35.231820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.536 [2024-07-15 11:40:35.231839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.536 [2024-07-15 11:40:35.231845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.797 [2024-07-15 11:40:35.246516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.797 [2024-07-15 11:40:35.246535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.797 [2024-07-15 11:40:35.246541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.797 [2024-07-15 11:40:35.261901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.797 [2024-07-15 11:40:35.261919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.797 [2024-07-15 11:40:35.261925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.797 [2024-07-15 11:40:35.273131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.797 [2024-07-15 11:40:35.273148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.797 [2024-07-15 11:40:35.273155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.797 [2024-07-15 11:40:35.284060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.797 [2024-07-15 11:40:35.284079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.797 [2024-07-15 11:40:35.284085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.797 [2024-07-15 11:40:35.296624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.797 [2024-07-15 11:40:35.296642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:06.797 [2024-07-15 11:40:35.296649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.797 [2024-07-15 11:40:35.308679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.797 [2024-07-15 11:40:35.308698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.797 [2024-07-15 11:40:35.308704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.797 [2024-07-15 11:40:35.321743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.797 [2024-07-15 11:40:35.321763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.797 [2024-07-15 11:40:35.321773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.797 [2024-07-15 11:40:35.335170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.797 [2024-07-15 11:40:35.335188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.797 [2024-07-15 11:40:35.335195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.797 [2024-07-15 11:40:35.349730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.797 [2024-07-15 11:40:35.349749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.797 [2024-07-15 11:40:35.349755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.797 [2024-07-15 11:40:35.365105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.797 [2024-07-15 11:40:35.365127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.797 [2024-07-15 11:40:35.365134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.797 [2024-07-15 11:40:35.379877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.798 [2024-07-15 11:40:35.379895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.798 [2024-07-15 11:40:35.379901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.798 [2024-07-15 11:40:35.393494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.798 [2024-07-15 11:40:35.393513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 
nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.798 [2024-07-15 11:40:35.393519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.798 [2024-07-15 11:40:35.408377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.798 [2024-07-15 11:40:35.408396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.798 [2024-07-15 11:40:35.408402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.798 [2024-07-15 11:40:35.420527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.798 [2024-07-15 11:40:35.420545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.798 [2024-07-15 11:40:35.420552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.798 [2024-07-15 11:40:35.433866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.798 [2024-07-15 11:40:35.433884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.798 [2024-07-15 11:40:35.433890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.798 [2024-07-15 11:40:35.446638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.798 [2024-07-15 11:40:35.446656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.798 [2024-07-15 11:40:35.446663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.798 [2024-07-15 11:40:35.461484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.798 [2024-07-15 11:40:35.461502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.798 [2024-07-15 11:40:35.461509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.798 [2024-07-15 11:40:35.474142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.798 [2024-07-15 11:40:35.474161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.798 [2024-07-15 11:40:35.474167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.798 [2024-07-15 11:40:35.488522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:06.798 [2024-07-15 11:40:35.488540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.798 [2024-07-15 11:40:35.488547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.060 [2024-07-15 11:40:35.503971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.060 [2024-07-15 11:40:35.503990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.060 [2024-07-15 11:40:35.503996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.060 [2024-07-15 11:40:35.519484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.060 [2024-07-15 11:40:35.519502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.060 [2024-07-15 11:40:35.519509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.060 [2024-07-15 11:40:35.535161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.060 [2024-07-15 11:40:35.535180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.060 [2024-07-15 11:40:35.535186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.060 [2024-07-15 11:40:35.548382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.060 [2024-07-15 11:40:35.548402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.060 [2024-07-15 11:40:35.548409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.060 [2024-07-15 11:40:35.561586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.060 [2024-07-15 11:40:35.561605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.060 [2024-07-15 11:40:35.561615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.060 [2024-07-15 11:40:35.576969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.060 [2024-07-15 11:40:35.576988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.060 [2024-07-15 11:40:35.576994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.060 [2024-07-15 11:40:35.594576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.060 
[2024-07-15 11:40:35.594594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.060 [2024-07-15 11:40:35.594601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.060 [2024-07-15 11:40:35.608108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.060 [2024-07-15 11:40:35.608131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.060 [2024-07-15 11:40:35.608137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.061 [2024-07-15 11:40:35.621564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.061 [2024-07-15 11:40:35.621583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.061 [2024-07-15 11:40:35.621590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.061 [2024-07-15 11:40:35.635464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.061 [2024-07-15 11:40:35.635483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.061 [2024-07-15 11:40:35.635490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.061 [2024-07-15 11:40:35.649643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.061 [2024-07-15 11:40:35.649662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.061 [2024-07-15 11:40:35.649668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.061 [2024-07-15 11:40:35.662564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.061 [2024-07-15 11:40:35.662583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.061 [2024-07-15 11:40:35.662589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.061 [2024-07-15 11:40:35.675736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.061 [2024-07-15 11:40:35.675754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.061 [2024-07-15 11:40:35.675761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.061 [2024-07-15 11:40:35.688216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x9aab80) 00:29:07.061 [2024-07-15 11:40:35.688238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.061 [2024-07-15 11:40:35.688245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.061 [2024-07-15 11:40:35.702721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.061 [2024-07-15 11:40:35.702741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.061 [2024-07-15 11:40:35.702747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.061 [2024-07-15 11:40:35.715814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.061 [2024-07-15 11:40:35.715833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.061 [2024-07-15 11:40:35.715839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.061 [2024-07-15 11:40:35.730798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.061 [2024-07-15 11:40:35.730816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.061 [2024-07-15 11:40:35.730822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.061 [2024-07-15 11:40:35.742196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.061 [2024-07-15 11:40:35.742215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.061 [2024-07-15 11:40:35.742222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.061 [2024-07-15 11:40:35.754787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.061 [2024-07-15 11:40:35.754807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.061 [2024-07-15 11:40:35.754813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.322 [2024-07-15 11:40:35.767706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.322 [2024-07-15 11:40:35.767726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.322 [2024-07-15 11:40:35.767732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.322 [2024-07-15 11:40:35.779064] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.323 [2024-07-15 11:40:35.779083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-07-15 11:40:35.779089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.323 [2024-07-15 11:40:35.792829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.323 [2024-07-15 11:40:35.792848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-07-15 11:40:35.792854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.323 [2024-07-15 11:40:35.806147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.323 [2024-07-15 11:40:35.806166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-07-15 11:40:35.806172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.323 [2024-07-15 11:40:35.820036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.323 [2024-07-15 11:40:35.820055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-07-15 11:40:35.820062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.323 [2024-07-15 11:40:35.834775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.323 [2024-07-15 11:40:35.834794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-07-15 11:40:35.834800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.323 [2024-07-15 11:40:35.845444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.323 [2024-07-15 11:40:35.845462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-07-15 11:40:35.845468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.323 [2024-07-15 11:40:35.859281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.323 [2024-07-15 11:40:35.859300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-07-15 11:40:35.859306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:29:07.323 [2024-07-15 11:40:35.874795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.323 [2024-07-15 11:40:35.874814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-07-15 11:40:35.874821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.323 [2024-07-15 11:40:35.887735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.323 [2024-07-15 11:40:35.887753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-07-15 11:40:35.887759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.323 [2024-07-15 11:40:35.899641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.323 [2024-07-15 11:40:35.899659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-07-15 11:40:35.899665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.323 [2024-07-15 11:40:35.912248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.323 [2024-07-15 11:40:35.912266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-07-15 11:40:35.912277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.323 [2024-07-15 11:40:35.925660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.323 [2024-07-15 11:40:35.925678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-07-15 11:40:35.925685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.323 [2024-07-15 11:40:35.938345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.323 [2024-07-15 11:40:35.938364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-07-15 11:40:35.938371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.323 [2024-07-15 11:40:35.951600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.323 [2024-07-15 11:40:35.951618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-07-15 11:40:35.951625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.323 [2024-07-15 11:40:35.964424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.323 [2024-07-15 11:40:35.964443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-07-15 11:40:35.964449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.323 [2024-07-15 11:40:35.976389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.323 [2024-07-15 11:40:35.976407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-07-15 11:40:35.976413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.323 [2024-07-15 11:40:35.986263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.323 [2024-07-15 11:40:35.986281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-07-15 11:40:35.986287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.323 [2024-07-15 11:40:35.998744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.323 [2024-07-15 11:40:35.998762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-07-15 11:40:35.998768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.323 [2024-07-15 11:40:36.011037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.323 [2024-07-15 11:40:36.011055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-07-15 11:40:36.011062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.585 [2024-07-15 11:40:36.024866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.585 [2024-07-15 11:40:36.024887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.585 [2024-07-15 11:40:36.024894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.585 [2024-07-15 11:40:36.040149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.585 [2024-07-15 11:40:36.040168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.585 [2024-07-15 11:40:36.040174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.585 [2024-07-15 11:40:36.054443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.585 [2024-07-15 11:40:36.054461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.585 [2024-07-15 11:40:36.054468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.585 [2024-07-15 11:40:36.068601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.585 [2024-07-15 11:40:36.068619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.585 [2024-07-15 11:40:36.068626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.585 [2024-07-15 11:40:36.080406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9aab80) 00:29:07.585 [2024-07-15 11:40:36.080425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.585 [2024-07-15 11:40:36.080431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.585 00:29:07.585 Latency(us) 00:29:07.585 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.585 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:07.585 nvme0n1 : 2.00 2300.25 287.53 0.00 0.00 6952.01 1884.16 17257.81 00:29:07.585 =================================================================================================================== 00:29:07.585 Total : 2300.25 287.53 0.00 0.00 6952.01 1884.16 17257.81 00:29:07.585 0 00:29:07.585 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:07.585 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:07.585 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:07.585 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:07.585 | .driver_specific 00:29:07.585 | .nvme_error 00:29:07.585 | .status_code 00:29:07.585 | .command_transient_transport_error' 00:29:07.585 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 148 > 0 )) 00:29:07.585 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3724477 00:29:07.585 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3724477 ']' 00:29:07.585 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3724477 00:29:07.585 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:07.585 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:07.585 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3724477 00:29:07.846 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:07.846 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:07.847 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3724477' 00:29:07.847 killing process with pid 3724477 00:29:07.847 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3724477 00:29:07.847 Received shutdown signal, test time was about 2.000000 seconds 00:29:07.847 00:29:07.847 Latency(us) 00:29:07.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.847 =================================================================================================================== 00:29:07.847 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:07.847 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3724477 00:29:07.847 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:29:07.847 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:07.847 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:07.847 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:07.847 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:07.847 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3725225 00:29:07.847 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3725225 /var/tmp/bperf.sock 00:29:07.847 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3725225 ']' 00:29:07.847 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:07.847 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:07.847 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:07.847 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:07.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:07.847 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:07.847 11:40:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:07.847 [2024-07-15 11:40:36.472514] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:29:07.847 [2024-07-15 11:40:36.472571] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3725225 ] 00:29:07.847 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.108 [2024-07-15 11:40:36.548804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.108 [2024-07-15 11:40:36.601424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.680 11:40:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:08.680 11:40:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:08.680 11:40:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:08.680 11:40:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:08.941 11:40:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:08.941 11:40:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.941 11:40:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:08.941 11:40:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.941 11:40:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:08.941 11:40:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:09.202 nvme0n1 00:29:09.202 11:40:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:09.202 11:40:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.202 11:40:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:09.202 11:40:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.202 11:40:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:09.202 11:40:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:09.202 Running I/O for 2 seconds... 
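The xtrace above drives the write-path digest-error case entirely over RPC: per-command NVMe error counting is switched on, the accel crc32c error injector is armed in corrupt mode, a controller is attached with --ddgst over TCP, bdevperf runs the queued randwrite job, and the transient-transport-error count is later read back from bdev_get_iostat. A minimal standalone sketch of that sequence, reusing only the rpc.py/bdevperf.py invocations visible in this log (socket path, target address, and injection interval as printed; which socket the accel_error_inject_error calls land on is an assumption, since the trace issues them through rpc_cmd rather than bperf_rpc):

#!/usr/bin/env bash
# Sketch under the assumptions above: an NVMe-oF/TCP target already serves
# nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, and bdevperf was started with
#   bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bperf_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
sock=/var/tmp/bperf.sock

# Count NVMe status codes per command and retry indefinitely, so digest
# failures show up as statistics instead of failing the whole job.
$rpc -s $sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Start clean: no crc32c error injection while the controller attaches
# (rpc_cmd in the original trace; default RPC socket assumed here).
$rpc accel_error_inject_error -o crc32c -t disable
# Attach the target with data digest enabled on the TCP transport.
$rpc -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Corrupt every 256th crc32c operation so data digest checks fail during the run
# (again issued via rpc_cmd in the trace; default RPC socket assumed here).
$rpc accel_error_inject_error -o crc32c -t corrupt -i 256
# Run the queued bdevperf job (randwrite, 4 KiB, qd=128, 2 seconds).
$bperf_py -s $sock perform_tests
# Read back how many completions ended as COMMAND TRANSIENT TRANSPORT ERROR,
# mirroring the get_transient_errcount jq filter shown earlier in the log.
$rpc -s $sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'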
00:29:09.202 [2024-07-15 11:40:37.868612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190ebfd0 00:29:09.202 [2024-07-15 11:40:37.870348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.202 [2024-07-15 11:40:37.870377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:09.202 [2024-07-15 11:40:37.878898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.202 [2024-07-15 11:40:37.879988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.202 [2024-07-15 11:40:37.880007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.202 [2024-07-15 11:40:37.890671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.202 [2024-07-15 11:40:37.891806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.202 [2024-07-15 11:40:37.891824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.202 [2024-07-15 11:40:37.902413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.463 [2024-07-15 11:40:37.903587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.463 [2024-07-15 11:40:37.903606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.463 [2024-07-15 11:40:37.914202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.463 [2024-07-15 11:40:37.915370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.463 [2024-07-15 11:40:37.915387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.463 [2024-07-15 11:40:37.925961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.463 [2024-07-15 11:40:37.927093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.463 [2024-07-15 11:40:37.927110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.463 [2024-07-15 11:40:37.937737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.463 [2024-07-15 11:40:37.938874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.463 [2024-07-15 11:40:37.938891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0059 
p:0 m:0 dnr:0 00:29:09.463 [2024-07-15 11:40:37.949492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.463 [2024-07-15 11:40:37.950650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.463 [2024-07-15 11:40:37.950667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.463 [2024-07-15 11:40:37.961258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.463 [2024-07-15 11:40:37.962429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.463 [2024-07-15 11:40:37.962445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.463 [2024-07-15 11:40:37.972992] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.463 [2024-07-15 11:40:37.974165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.463 [2024-07-15 11:40:37.974182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.463 [2024-07-15 11:40:37.984741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.463 [2024-07-15 11:40:37.985896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.463 [2024-07-15 11:40:37.985913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.463 [2024-07-15 11:40:37.996498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.463 [2024-07-15 11:40:37.997654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.463 [2024-07-15 11:40:37.997671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.463 [2024-07-15 11:40:38.008240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.463 [2024-07-15 11:40:38.009417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.463 [2024-07-15 11:40:38.009433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.463 [2024-07-15 11:40:38.020087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.463 [2024-07-15 11:40:38.021221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.463 [2024-07-15 11:40:38.021237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:107 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.463 [2024-07-15 11:40:38.031846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.463 [2024-07-15 11:40:38.032976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.463 [2024-07-15 11:40:38.032993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.463 [2024-07-15 11:40:38.043567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.463 [2024-07-15 11:40:38.044722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.463 [2024-07-15 11:40:38.044739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.464 [2024-07-15 11:40:38.055335] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.464 [2024-07-15 11:40:38.056520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.464 [2024-07-15 11:40:38.056536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.464 [2024-07-15 11:40:38.067066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.464 [2024-07-15 11:40:38.068237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.464 [2024-07-15 11:40:38.068254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.464 [2024-07-15 11:40:38.078834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.464 [2024-07-15 11:40:38.079955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.464 [2024-07-15 11:40:38.079971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.464 [2024-07-15 11:40:38.090606] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.464 [2024-07-15 11:40:38.091724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.464 [2024-07-15 11:40:38.091739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.464 [2024-07-15 11:40:38.102340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.464 [2024-07-15 11:40:38.103705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.464 [2024-07-15 11:40:38.103721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.464 [2024-07-15 11:40:38.114266] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.464 [2024-07-15 11:40:38.115385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.464 [2024-07-15 11:40:38.115402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.464 [2024-07-15 11:40:38.126002] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.464 [2024-07-15 11:40:38.127130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.464 [2024-07-15 11:40:38.127152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.464 [2024-07-15 11:40:38.137728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.464 [2024-07-15 11:40:38.138888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.464 [2024-07-15 11:40:38.138904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.464 [2024-07-15 11:40:38.149467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.464 [2024-07-15 11:40:38.150637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.464 [2024-07-15 11:40:38.150653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.464 [2024-07-15 11:40:38.161195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.464 [2024-07-15 11:40:38.162352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.464 [2024-07-15 11:40:38.162368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.725 [2024-07-15 11:40:38.172930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.725 [2024-07-15 11:40:38.174092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.725 [2024-07-15 11:40:38.174108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.725 [2024-07-15 11:40:38.184691] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.725 [2024-07-15 11:40:38.185853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.725 [2024-07-15 11:40:38.185869] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.725 [2024-07-15 11:40:38.196431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.725 [2024-07-15 11:40:38.197605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.725 [2024-07-15 11:40:38.197621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.725 [2024-07-15 11:40:38.208167] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.725 [2024-07-15 11:40:38.209324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.725 [2024-07-15 11:40:38.209340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.725 [2024-07-15 11:40:38.219887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.725 [2024-07-15 11:40:38.221043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.725 [2024-07-15 11:40:38.221059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.725 [2024-07-15 11:40:38.231616] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.725 [2024-07-15 11:40:38.232779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.725 [2024-07-15 11:40:38.232796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.725 [2024-07-15 11:40:38.243353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.725 [2024-07-15 11:40:38.244510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.725 [2024-07-15 11:40:38.244525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.726 [2024-07-15 11:40:38.255078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.726 [2024-07-15 11:40:38.256255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.726 [2024-07-15 11:40:38.256271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.726 [2024-07-15 11:40:38.266869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.726 [2024-07-15 11:40:38.268041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.726 [2024-07-15 
11:40:38.268056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.726 [2024-07-15 11:40:38.278579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.726 [2024-07-15 11:40:38.279754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.726 [2024-07-15 11:40:38.279770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.726 [2024-07-15 11:40:38.290287] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.726 [2024-07-15 11:40:38.291454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.726 [2024-07-15 11:40:38.291470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.726 [2024-07-15 11:40:38.301990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.726 [2024-07-15 11:40:38.303155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.726 [2024-07-15 11:40:38.303172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.726 [2024-07-15 11:40:38.313727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.726 [2024-07-15 11:40:38.314902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.726 [2024-07-15 11:40:38.314918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.726 [2024-07-15 11:40:38.325486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.726 [2024-07-15 11:40:38.326613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.726 [2024-07-15 11:40:38.326629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.726 [2024-07-15 11:40:38.337202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.726 [2024-07-15 11:40:38.338371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.726 [2024-07-15 11:40:38.338386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.726 [2024-07-15 11:40:38.348911] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.726 [2024-07-15 11:40:38.350070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.726 [2024-07-15 
11:40:38.350086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.726 [2024-07-15 11:40:38.360618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.726 [2024-07-15 11:40:38.361790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.726 [2024-07-15 11:40:38.361806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.726 [2024-07-15 11:40:38.372344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.726 [2024-07-15 11:40:38.373522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.726 [2024-07-15 11:40:38.373538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.726 [2024-07-15 11:40:38.384071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.726 [2024-07-15 11:40:38.385239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.726 [2024-07-15 11:40:38.385256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.726 [2024-07-15 11:40:38.395799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.726 [2024-07-15 11:40:38.396966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.726 [2024-07-15 11:40:38.396982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.726 [2024-07-15 11:40:38.407517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.726 [2024-07-15 11:40:38.408676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.726 [2024-07-15 11:40:38.408692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.726 [2024-07-15 11:40:38.419218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.726 [2024-07-15 11:40:38.420380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.726 [2024-07-15 11:40:38.420396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.988 [2024-07-15 11:40:38.430937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.988 [2024-07-15 11:40:38.432100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.988 
[2024-07-15 11:40:38.432119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.988 [2024-07-15 11:40:38.442671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.988 [2024-07-15 11:40:38.443825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.988 [2024-07-15 11:40:38.443841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.988 [2024-07-15 11:40:38.454412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.988 [2024-07-15 11:40:38.455585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.988 [2024-07-15 11:40:38.455601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.988 [2024-07-15 11:40:38.466132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.988 [2024-07-15 11:40:38.467293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.988 [2024-07-15 11:40:38.467309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.988 [2024-07-15 11:40:38.477959] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.988 [2024-07-15 11:40:38.479135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.988 [2024-07-15 11:40:38.479151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.988 [2024-07-15 11:40:38.489685] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.988 [2024-07-15 11:40:38.490844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.988 [2024-07-15 11:40:38.490860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.988 [2024-07-15 11:40:38.501413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.988 [2024-07-15 11:40:38.502588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.988 [2024-07-15 11:40:38.502603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.988 [2024-07-15 11:40:38.513137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.988 [2024-07-15 11:40:38.514271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18819 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:09.988 [2024-07-15 11:40:38.514287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.988 [2024-07-15 11:40:38.524871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.988 [2024-07-15 11:40:38.526035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.988 [2024-07-15 11:40:38.526051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.988 [2024-07-15 11:40:38.536576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.988 [2024-07-15 11:40:38.537760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.988 [2024-07-15 11:40:38.537776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.988 [2024-07-15 11:40:38.548308] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.988 [2024-07-15 11:40:38.549475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.988 [2024-07-15 11:40:38.549491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.988 [2024-07-15 11:40:38.560014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.988 [2024-07-15 11:40:38.561181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.988 [2024-07-15 11:40:38.561197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.988 [2024-07-15 11:40:38.571746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.988 [2024-07-15 11:40:38.572915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.988 [2024-07-15 11:40:38.572930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.988 [2024-07-15 11:40:38.583444] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.988 [2024-07-15 11:40:38.584620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.988 [2024-07-15 11:40:38.584636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.988 [2024-07-15 11:40:38.595157] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.988 [2024-07-15 11:40:38.596277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17823 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.988 [2024-07-15 11:40:38.596293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.988 [2024-07-15 11:40:38.606840] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.988 [2024-07-15 11:40:38.608016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.988 [2024-07-15 11:40:38.608031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.988 [2024-07-15 11:40:38.618561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.988 [2024-07-15 11:40:38.619737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.988 [2024-07-15 11:40:38.619753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.988 [2024-07-15 11:40:38.630288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.988 [2024-07-15 11:40:38.631471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.988 [2024-07-15 11:40:38.631486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.988 [2024-07-15 11:40:38.642010] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.988 [2024-07-15 11:40:38.643169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.989 [2024-07-15 11:40:38.643184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.989 [2024-07-15 11:40:38.653730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.989 [2024-07-15 11:40:38.654900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.989 [2024-07-15 11:40:38.654916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.989 [2024-07-15 11:40:38.665469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.989 [2024-07-15 11:40:38.666632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.989 [2024-07-15 11:40:38.666648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.989 [2024-07-15 11:40:38.677178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:09.989 [2024-07-15 11:40:38.678338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:96 nsid:1 lba:18545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.989 [2024-07-15 11:40:38.678353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:09.989 [2024-07-15 11:40:38.688929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.250 [2024-07-15 11:40:38.690102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.250 [2024-07-15 11:40:38.690118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.250 [2024-07-15 11:40:38.700663] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.250 [2024-07-15 11:40:38.701822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.250 [2024-07-15 11:40:38.701838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.250 [2024-07-15 11:40:38.712402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.250 [2024-07-15 11:40:38.713566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.250 [2024-07-15 11:40:38.713581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.250 [2024-07-15 11:40:38.724129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.250 [2024-07-15 11:40:38.725270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.250 [2024-07-15 11:40:38.725285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.250 [2024-07-15 11:40:38.735837] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.250 [2024-07-15 11:40:38.736987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.250 [2024-07-15 11:40:38.737006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.250 [2024-07-15 11:40:38.747541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.250 [2024-07-15 11:40:38.748695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.250 [2024-07-15 11:40:38.748711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.250 [2024-07-15 11:40:38.759255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.250 [2024-07-15 11:40:38.760390] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.250 [2024-07-15 11:40:38.760406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.250 [2024-07-15 11:40:38.770962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.250 [2024-07-15 11:40:38.772128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.250 [2024-07-15 11:40:38.772144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.250 [2024-07-15 11:40:38.782679] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.250 [2024-07-15 11:40:38.783854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.250 [2024-07-15 11:40:38.783871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.250 [2024-07-15 11:40:38.794431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.250 [2024-07-15 11:40:38.795611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.250 [2024-07-15 11:40:38.795627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.250 [2024-07-15 11:40:38.806140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.250 [2024-07-15 11:40:38.807296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.250 [2024-07-15 11:40:38.807312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.250 [2024-07-15 11:40:38.817840] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.250 [2024-07-15 11:40:38.819000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.250 [2024-07-15 11:40:38.819015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.250 [2024-07-15 11:40:38.829582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.250 [2024-07-15 11:40:38.830745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.250 [2024-07-15 11:40:38.830761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.250 [2024-07-15 11:40:38.841308] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.250 [2024-07-15 11:40:38.842463] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.250 [2024-07-15 11:40:38.842480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.250 [2024-07-15 11:40:38.853013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.250 [2024-07-15 11:40:38.854183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.250 [2024-07-15 11:40:38.854198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.251 [2024-07-15 11:40:38.864718] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.251 [2024-07-15 11:40:38.865880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.251 [2024-07-15 11:40:38.865895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.251 [2024-07-15 11:40:38.876457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.251 [2024-07-15 11:40:38.877630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.251 [2024-07-15 11:40:38.877645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.251 [2024-07-15 11:40:38.888215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.251 [2024-07-15 11:40:38.889390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.251 [2024-07-15 11:40:38.889406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.251 [2024-07-15 11:40:38.899937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.251 [2024-07-15 11:40:38.901094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.251 [2024-07-15 11:40:38.901110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.251 [2024-07-15 11:40:38.911660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.251 [2024-07-15 11:40:38.912809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.251 [2024-07-15 11:40:38.912825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.251 [2024-07-15 11:40:38.923380] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.251 [2024-07-15 
11:40:38.924537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.251 [2024-07-15 11:40:38.924552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.251 [2024-07-15 11:40:38.935070] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.251 [2024-07-15 11:40:38.936228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.251 [2024-07-15 11:40:38.936244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.251 [2024-07-15 11:40:38.946815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.251 [2024-07-15 11:40:38.947987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.251 [2024-07-15 11:40:38.948002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.512 [2024-07-15 11:40:38.958530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.512 [2024-07-15 11:40:38.959670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.512 [2024-07-15 11:40:38.959686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.512 [2024-07-15 11:40:38.970253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.512 [2024-07-15 11:40:38.971413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.512 [2024-07-15 11:40:38.971429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.512 [2024-07-15 11:40:38.981941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.512 [2024-07-15 11:40:38.983100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.512 [2024-07-15 11:40:38.983116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.513 [2024-07-15 11:40:38.993660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.513 [2024-07-15 11:40:38.994777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.513 [2024-07-15 11:40:38.994792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.513 [2024-07-15 11:40:39.005372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 
00:29:10.513 [2024-07-15 11:40:39.006546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.513 [2024-07-15 11:40:39.006562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.513 [2024-07-15 11:40:39.017101] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.513 [2024-07-15 11:40:39.018270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.513 [2024-07-15 11:40:39.018286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.513 [2024-07-15 11:40:39.028950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.513 [2024-07-15 11:40:39.030108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.513 [2024-07-15 11:40:39.030125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.513 [2024-07-15 11:40:39.040689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.513 [2024-07-15 11:40:39.041852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.513 [2024-07-15 11:40:39.041871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.513 [2024-07-15 11:40:39.052422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.513 [2024-07-15 11:40:39.053588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.513 [2024-07-15 11:40:39.053603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.513 [2024-07-15 11:40:39.064170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.513 [2024-07-15 11:40:39.065335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.513 [2024-07-15 11:40:39.065350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.513 [2024-07-15 11:40:39.075922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.513 [2024-07-15 11:40:39.077089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.513 [2024-07-15 11:40:39.077105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.513 [2024-07-15 11:40:39.087672] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.513 [2024-07-15 11:40:39.088828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.513 [2024-07-15 11:40:39.088844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.513 [2024-07-15 11:40:39.099424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.513 [2024-07-15 11:40:39.100587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.513 [2024-07-15 11:40:39.100603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.513 [2024-07-15 11:40:39.111366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.513 [2024-07-15 11:40:39.112525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.513 [2024-07-15 11:40:39.112541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.513 [2024-07-15 11:40:39.123099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.513 [2024-07-15 11:40:39.124271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.513 [2024-07-15 11:40:39.124287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.513 [2024-07-15 11:40:39.134857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.513 [2024-07-15 11:40:39.136020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.513 [2024-07-15 11:40:39.136035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.513 [2024-07-15 11:40:39.146586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.513 [2024-07-15 11:40:39.147745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.513 [2024-07-15 11:40:39.147763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.513 [2024-07-15 11:40:39.158355] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.513 [2024-07-15 11:40:39.159506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.513 [2024-07-15 11:40:39.159522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.513 [2024-07-15 11:40:39.170082] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.513 [2024-07-15 11:40:39.171242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.513 [2024-07-15 11:40:39.171259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.513 [2024-07-15 11:40:39.181837] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.513 [2024-07-15 11:40:39.182998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.513 [2024-07-15 11:40:39.183015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.513 [2024-07-15 11:40:39.193550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.513 [2024-07-15 11:40:39.194714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.513 [2024-07-15 11:40:39.194730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.513 [2024-07-15 11:40:39.205301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.513 [2024-07-15 11:40:39.206475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.513 [2024-07-15 11:40:39.206491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.775 [2024-07-15 11:40:39.217038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.775 [2024-07-15 11:40:39.218199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.775 [2024-07-15 11:40:39.218214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.775 [2024-07-15 11:40:39.228774] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.775 [2024-07-15 11:40:39.229929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.775 [2024-07-15 11:40:39.229945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.775 [2024-07-15 11:40:39.240501] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.775 [2024-07-15 11:40:39.241672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.775 [2024-07-15 11:40:39.241689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.775 [2024-07-15 11:40:39.252230] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.775 [2024-07-15 11:40:39.253372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.775 [2024-07-15 11:40:39.253389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.775 [2024-07-15 11:40:39.263961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.775 [2024-07-15 11:40:39.265082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.775 [2024-07-15 11:40:39.265097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.775 [2024-07-15 11:40:39.275701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.775 [2024-07-15 11:40:39.276871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.775 [2024-07-15 11:40:39.276887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.775 [2024-07-15 11:40:39.287438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.776 [2024-07-15 11:40:39.288598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.776 [2024-07-15 11:40:39.288614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.776 [2024-07-15 11:40:39.299178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.776 [2024-07-15 11:40:39.300347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.776 [2024-07-15 11:40:39.300363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.776 [2024-07-15 11:40:39.310932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.776 [2024-07-15 11:40:39.312097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.776 [2024-07-15 11:40:39.312114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.776 [2024-07-15 11:40:39.322673] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.776 [2024-07-15 11:40:39.323841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.776 [2024-07-15 11:40:39.323857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.776 [2024-07-15 
11:40:39.334434] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.776 [2024-07-15 11:40:39.335598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.776 [2024-07-15 11:40:39.335615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.776 [2024-07-15 11:40:39.346166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.776 [2024-07-15 11:40:39.347316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.776 [2024-07-15 11:40:39.347332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.776 [2024-07-15 11:40:39.357884] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.776 [2024-07-15 11:40:39.359050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.776 [2024-07-15 11:40:39.359066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.776 [2024-07-15 11:40:39.369603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.776 [2024-07-15 11:40:39.370743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.776 [2024-07-15 11:40:39.370759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.776 [2024-07-15 11:40:39.381339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.776 [2024-07-15 11:40:39.382522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.776 [2024-07-15 11:40:39.382539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.776 [2024-07-15 11:40:39.393077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.776 [2024-07-15 11:40:39.394248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.776 [2024-07-15 11:40:39.394264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.776 [2024-07-15 11:40:39.404810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.776 [2024-07-15 11:40:39.405970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.776 [2024-07-15 11:40:39.405986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.776 
[2024-07-15 11:40:39.416550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.776 [2024-07-15 11:40:39.417721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.776 [2024-07-15 11:40:39.417737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.776 [2024-07-15 11:40:39.428279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.776 [2024-07-15 11:40:39.429436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.776 [2024-07-15 11:40:39.429452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.776 [2024-07-15 11:40:39.440041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.776 [2024-07-15 11:40:39.441201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.776 [2024-07-15 11:40:39.441218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.776 [2024-07-15 11:40:39.451779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.776 [2024-07-15 11:40:39.452942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.776 [2024-07-15 11:40:39.452964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.776 [2024-07-15 11:40:39.463529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.776 [2024-07-15 11:40:39.464697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.776 [2024-07-15 11:40:39.464715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:10.776 [2024-07-15 11:40:39.475245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:10.776 [2024-07-15 11:40:39.476407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.776 [2024-07-15 11:40:39.476425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.038 [2024-07-15 11:40:39.486974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.038 [2024-07-15 11:40:39.488148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.038 [2024-07-15 11:40:39.488164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 
dnr:0 00:29:11.038 [2024-07-15 11:40:39.498689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.038 [2024-07-15 11:40:39.499827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.038 [2024-07-15 11:40:39.499843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.038 [2024-07-15 11:40:39.510414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.038 [2024-07-15 11:40:39.511593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.038 [2024-07-15 11:40:39.511609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.038 [2024-07-15 11:40:39.522172] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.038 [2024-07-15 11:40:39.523329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.038 [2024-07-15 11:40:39.523345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.038 [2024-07-15 11:40:39.533903] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.038 [2024-07-15 11:40:39.535072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.038 [2024-07-15 11:40:39.535089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.038 [2024-07-15 11:40:39.545639] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.038 [2024-07-15 11:40:39.546810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.038 [2024-07-15 11:40:39.546826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.038 [2024-07-15 11:40:39.557384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.038 [2024-07-15 11:40:39.558548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.038 [2024-07-15 11:40:39.558564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.038 [2024-07-15 11:40:39.569095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.038 [2024-07-15 11:40:39.570263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.038 [2024-07-15 11:40:39.570280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.038 [2024-07-15 11:40:39.580818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.038 [2024-07-15 11:40:39.581978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.038 [2024-07-15 11:40:39.581994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.038 [2024-07-15 11:40:39.592547] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.038 [2024-07-15 11:40:39.593718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.039 [2024-07-15 11:40:39.593733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.039 [2024-07-15 11:40:39.604265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.039 [2024-07-15 11:40:39.605431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.039 [2024-07-15 11:40:39.605447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.039 [2024-07-15 11:40:39.615970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.039 [2024-07-15 11:40:39.617133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.039 [2024-07-15 11:40:39.617149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.039 [2024-07-15 11:40:39.627710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.039 [2024-07-15 11:40:39.628869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.039 [2024-07-15 11:40:39.628885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.039 [2024-07-15 11:40:39.639433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.039 [2024-07-15 11:40:39.640609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.039 [2024-07-15 11:40:39.640625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.039 [2024-07-15 11:40:39.651169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.039 [2024-07-15 11:40:39.652335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.039 [2024-07-15 11:40:39.652351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.039 [2024-07-15 11:40:39.662883] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.039 [2024-07-15 11:40:39.664055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.039 [2024-07-15 11:40:39.664071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.039 [2024-07-15 11:40:39.674604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.039 [2024-07-15 11:40:39.675776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.039 [2024-07-15 11:40:39.675791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.039 [2024-07-15 11:40:39.686316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.039 [2024-07-15 11:40:39.687508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.039 [2024-07-15 11:40:39.687525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.039 [2024-07-15 11:40:39.698097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.039 [2024-07-15 11:40:39.699271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.039 [2024-07-15 11:40:39.699287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.039 [2024-07-15 11:40:39.709834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.039 [2024-07-15 11:40:39.710991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.039 [2024-07-15 11:40:39.711009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.039 [2024-07-15 11:40:39.721591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.039 [2024-07-15 11:40:39.722721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.039 [2024-07-15 11:40:39.722737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.039 [2024-07-15 11:40:39.733309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.039 [2024-07-15 11:40:39.734460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.039 [2024-07-15 11:40:39.734476] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.300 [2024-07-15 11:40:39.745032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.300 [2024-07-15 11:40:39.746203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.300 [2024-07-15 11:40:39.746219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.300 [2024-07-15 11:40:39.756739] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.300 [2024-07-15 11:40:39.757894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.300 [2024-07-15 11:40:39.757913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.300 [2024-07-15 11:40:39.768482] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.300 [2024-07-15 11:40:39.769636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.300 [2024-07-15 11:40:39.769652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.300 [2024-07-15 11:40:39.780211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.300 [2024-07-15 11:40:39.781332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.300 [2024-07-15 11:40:39.781348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.300 [2024-07-15 11:40:39.791928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.300 [2024-07-15 11:40:39.793089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.300 [2024-07-15 11:40:39.793106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.300 [2024-07-15 11:40:39.803653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.300 [2024-07-15 11:40:39.804821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.300 [2024-07-15 11:40:39.804838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.300 [2024-07-15 11:40:39.815380] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.300 [2024-07-15 11:40:39.816532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.300 [2024-07-15 11:40:39.816548] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.300 [2024-07-15 11:40:39.827098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.300 [2024-07-15 11:40:39.828258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.300 [2024-07-15 11:40:39.828274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.300 [2024-07-15 11:40:39.838815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.300 [2024-07-15 11:40:39.839967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.300 [2024-07-15 11:40:39.839983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.300 [2024-07-15 11:40:39.850542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b6aa0) with pdu=0x2000190f8618 00:29:11.300 [2024-07-15 11:40:39.851701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:11.300 [2024-07-15 11:40:39.851717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:11.300 00:29:11.300 Latency(us) 00:29:11.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.300 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:11.300 nvme0n1 : 2.00 21742.21 84.93 0.00 0.00 5879.38 3194.88 11905.71 00:29:11.300 =================================================================================================================== 00:29:11.300 Total : 21742.21 84.93 0.00 0.00 5879.38 3194.88 11905.71 00:29:11.300 0 00:29:11.300 11:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:11.300 11:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:11.300 11:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:11.300 | .driver_specific 00:29:11.300 | .nvme_error 00:29:11.300 | .status_code 00:29:11.300 | .command_transient_transport_error' 00:29:11.300 11:40:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:11.561 11:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 170 > 0 )) 00:29:11.562 11:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3725225 00:29:11.562 11:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3725225 ']' 00:29:11.562 11:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3725225 00:29:11.562 11:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:11.562 11:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:11.562 11:40:40 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3725225 00:29:11.562 11:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:11.562 11:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:11.562 11:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3725225' 00:29:11.562 killing process with pid 3725225 00:29:11.562 11:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3725225 00:29:11.562 Received shutdown signal, test time was about 2.000000 seconds 00:29:11.562 00:29:11.562 Latency(us) 00:29:11.562 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.562 =================================================================================================================== 00:29:11.562 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:11.562 11:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3725225 00:29:11.562 11:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:11.562 11:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:11.562 11:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:11.562 11:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:11.562 11:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:11.562 11:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3725913 00:29:11.562 11:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3725913 /var/tmp/bperf.sock 00:29:11.562 11:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3725913 ']' 00:29:11.562 11:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:11.562 11:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:11.562 11:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:11.562 11:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:11.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:11.562 11:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:11.562 11:40:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:11.823 [2024-07-15 11:40:40.266298] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:29:11.823 [2024-07-15 11:40:40.266352] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3725913 ] 00:29:11.823 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:11.823 Zero copy mechanism will not be used. 
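[editor's note] The trace just above reads the transient-error counter from the finished bdevperf instance (`bdev_get_iostat` on the bperf RPC socket, filtered with `jq`), confirms it is non-zero, kills that process, and relaunches bdevperf for the next pass (randwrite, 128 KiB I/O, queue depth 16). The following is a minimal standalone sketch of that counter-check step only, not the verbatim host/digest.sh helper: the `SPDK_DIR` default and the script form are assumptions, while the socket path, bdev name and jq filter are copied from the trace.

```bash
#!/usr/bin/env bash
# Sketch of the get_transient_errcount check traced above.
# Assumes: SPDK_DIR points at an SPDK checkout and a bdevperf instance
# is already serving RPCs on /var/tmp/bperf.sock (as in the log).
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
BPERF_SOCK=/var/tmp/bperf.sock

get_transient_errcount() {
    local bdev=$1
    # bdev_get_iostat reports per-bdev statistics; the nvme_error counters
    # are present because the controller was attached with --nvme-error-stat.
    "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
}

errs=$(get_transient_errcount nvme0n1)
if (( errs > 0 )); then
    # The test only requires that at least one transient transport error
    # (i.e. a detected digest error) was counted during the run.
    echo "observed $errs transient transport errors"
fi
```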
00:29:11.823 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.823 [2024-07-15 11:40:40.341439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.823 [2024-07-15 11:40:40.393545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.395 11:40:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:12.395 11:40:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:12.395 11:40:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:12.395 11:40:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:12.657 11:40:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:12.657 11:40:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.657 11:40:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:12.657 11:40:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.657 11:40:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:12.657 11:40:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:12.918 nvme0n1 00:29:12.918 11:40:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:12.918 11:40:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.918 11:40:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:12.918 11:40:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.918 11:40:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:12.918 11:40:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:12.918 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:12.918 Zero copy mechanism will not be used. 00:29:12.918 Running I/O for 2 seconds... 
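[editor's note] The setup traced just above, once the new bdevperf process is listening, is: enable NVMe error statistics with unlimited bdev retries, reset/arm CRC-32C error injection through `accel_error_inject_error`, attach the TCP controller with data digest enabled (`--ddgst`), and then drive the 2-second workload via `bdevperf.py perform_tests`. The corrupted CRC-32C results are what show up in the log as "Data digest error" / COMMAND TRANSIENT TRANSPORT ERROR completions. Below is a condensed, hedged sketch of that RPC sequence, not the script itself: socket paths, target address, NQN and option values are copied from the trace, while `SPDK_DIR` and `TARGET_RPC` are placeholders (the trace issues the accel_error_inject_error calls through its `rpc_cmd` wrapper, whose socket is not visible in this excerpt).

```bash
#!/usr/bin/env bash
# Condensed sketch of the digest-error test setup traced above
# (not the verbatim host/digest.sh).
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
BPERF_SOCK=/var/tmp/bperf.sock           # bdevperf RPC socket, as in the log
TARGET_RPC=${TARGET_RPC:-/var/tmp/spdk.sock}  # assumption: nvmf target's default socket

bperf_rpc()  { "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" "$@"; }
target_rpc() { "$SPDK_DIR/scripts/rpc.py" -s "$TARGET_RPC" "$@"; }

# Count NVMe error completions and retry indefinitely, so digest errors are
# recorded in bdev_get_iostat instead of failing the workload outright.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any previous crc32c error injection before attaching.
target_rpc accel_error_inject_error -o crc32c -t disable

# Attach the NVMe-oF TCP controller with data digest enabled (--ddgst).
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Arm crc32c corruption (the -o/-t/-i values are copied from the trace).
target_rpc accel_error_inject_error -o crc32c -t corrupt -i 32

# Kick off the 2-second randwrite run that bdevperf was started with (-z).
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
```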
00:29:12.918 [2024-07-15 11:40:41.538132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:12.918 [2024-07-15 11:40:41.538556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.918 [2024-07-15 11:40:41.538585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.918 [2024-07-15 11:40:41.552708] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:12.918 [2024-07-15 11:40:41.553198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.918 [2024-07-15 11:40:41.553221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.918 [2024-07-15 11:40:41.564594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:12.918 [2024-07-15 11:40:41.564970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.918 [2024-07-15 11:40:41.564989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.918 [2024-07-15 11:40:41.575215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:12.918 [2024-07-15 11:40:41.575666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.918 [2024-07-15 11:40:41.575684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.918 [2024-07-15 11:40:41.585631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:12.918 [2024-07-15 11:40:41.585971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.918 [2024-07-15 11:40:41.585988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.918 [2024-07-15 11:40:41.595711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:12.918 [2024-07-15 11:40:41.596016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.918 [2024-07-15 11:40:41.596034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.918 [2024-07-15 11:40:41.605463] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:12.918 [2024-07-15 11:40:41.605802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.918 [2024-07-15 11:40:41.605819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.919 [2024-07-15 11:40:41.613932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:12.919 [2024-07-15 11:40:41.614292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.919 [2024-07-15 11:40:41.614309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.180 [2024-07-15 11:40:41.623743] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.180 [2024-07-15 11:40:41.623975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.180 [2024-07-15 11:40:41.623992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.180 [2024-07-15 11:40:41.633440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.180 [2024-07-15 11:40:41.633810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.180 [2024-07-15 11:40:41.633831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.180 [2024-07-15 11:40:41.641984] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.180 [2024-07-15 11:40:41.642064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.180 [2024-07-15 11:40:41.642079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.180 [2024-07-15 11:40:41.652690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.180 [2024-07-15 11:40:41.653046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.180 [2024-07-15 11:40:41.653063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.180 [2024-07-15 11:40:41.662643] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.180 [2024-07-15 11:40:41.662809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.180 [2024-07-15 11:40:41.662824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.180 [2024-07-15 11:40:41.673780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.180 [2024-07-15 11:40:41.674008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.180 [2024-07-15 11:40:41.674025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.180 [2024-07-15 11:40:41.683800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.180 [2024-07-15 11:40:41.684186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.180 [2024-07-15 11:40:41.684203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.180 [2024-07-15 11:40:41.694742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.180 [2024-07-15 11:40:41.695071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.180 [2024-07-15 11:40:41.695087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.180 [2024-07-15 11:40:41.704826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.180 [2024-07-15 11:40:41.705241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.180 [2024-07-15 11:40:41.705259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.180 [2024-07-15 11:40:41.714355] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.180 [2024-07-15 11:40:41.714674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.180 [2024-07-15 11:40:41.714691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.181 [2024-07-15 11:40:41.725653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.181 [2024-07-15 11:40:41.726128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.181 [2024-07-15 11:40:41.726145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.181 [2024-07-15 11:40:41.735900] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.181 [2024-07-15 11:40:41.736152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.181 [2024-07-15 11:40:41.736170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.181 [2024-07-15 11:40:41.744677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.181 [2024-07-15 11:40:41.745089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.181 [2024-07-15 11:40:41.745105] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.181 [2024-07-15 11:40:41.754469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.181 [2024-07-15 11:40:41.754718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.181 [2024-07-15 11:40:41.754734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.181 [2024-07-15 11:40:41.763690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.181 [2024-07-15 11:40:41.763966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.181 [2024-07-15 11:40:41.763984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.181 [2024-07-15 11:40:41.773057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.181 [2024-07-15 11:40:41.773311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.181 [2024-07-15 11:40:41.773329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.181 [2024-07-15 11:40:41.782044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.181 [2024-07-15 11:40:41.782456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.181 [2024-07-15 11:40:41.782474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.181 [2024-07-15 11:40:41.791806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.181 [2024-07-15 11:40:41.792195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.181 [2024-07-15 11:40:41.792213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.181 [2024-07-15 11:40:41.800168] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.181 [2024-07-15 11:40:41.800422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.181 [2024-07-15 11:40:41.800438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.181 [2024-07-15 11:40:41.807749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.181 [2024-07-15 11:40:41.808002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.181 
[2024-07-15 11:40:41.808019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.181 [2024-07-15 11:40:41.815197] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.181 [2024-07-15 11:40:41.815526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.181 [2024-07-15 11:40:41.815543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.181 [2024-07-15 11:40:41.823299] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.181 [2024-07-15 11:40:41.823573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.181 [2024-07-15 11:40:41.823590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.181 [2024-07-15 11:40:41.832406] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.181 [2024-07-15 11:40:41.832817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.181 [2024-07-15 11:40:41.832834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.181 [2024-07-15 11:40:41.842268] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.181 [2024-07-15 11:40:41.842656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.181 [2024-07-15 11:40:41.842674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.181 [2024-07-15 11:40:41.850795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.181 [2024-07-15 11:40:41.851018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.181 [2024-07-15 11:40:41.851034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.181 [2024-07-15 11:40:41.858381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.181 [2024-07-15 11:40:41.858633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.181 [2024-07-15 11:40:41.858651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.181 [2024-07-15 11:40:41.866742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.181 [2024-07-15 11:40:41.866990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:13.181 [2024-07-15 11:40:41.867007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.181 [2024-07-15 11:40:41.875415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.181 [2024-07-15 11:40:41.875658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.181 [2024-07-15 11:40:41.875678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.443 [2024-07-15 11:40:41.883749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.443 [2024-07-15 11:40:41.884071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.443 [2024-07-15 11:40:41.884088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.443 [2024-07-15 11:40:41.893169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.443 [2024-07-15 11:40:41.893563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.443 [2024-07-15 11:40:41.893580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.443 [2024-07-15 11:40:41.902460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.443 [2024-07-15 11:40:41.902812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.443 [2024-07-15 11:40:41.902829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.443 [2024-07-15 11:40:41.911433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.443 [2024-07-15 11:40:41.911797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.443 [2024-07-15 11:40:41.911814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.443 [2024-07-15 11:40:41.920601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.443 [2024-07-15 11:40:41.920944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.443 [2024-07-15 11:40:41.920961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.443 [2024-07-15 11:40:41.929457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.443 [2024-07-15 11:40:41.929723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.443 [2024-07-15 11:40:41.929740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.443 [2024-07-15 11:40:41.937544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.443 [2024-07-15 11:40:41.937782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.443 [2024-07-15 11:40:41.937799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.443 [2024-07-15 11:40:41.946068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.443 [2024-07-15 11:40:41.946354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.443 [2024-07-15 11:40:41.946370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.443 [2024-07-15 11:40:41.956350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.443 [2024-07-15 11:40:41.956577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.443 [2024-07-15 11:40:41.956593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.443 [2024-07-15 11:40:41.968400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.443 [2024-07-15 11:40:41.968888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.443 [2024-07-15 11:40:41.968905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.443 [2024-07-15 11:40:41.980184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.443 [2024-07-15 11:40:41.980559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.443 [2024-07-15 11:40:41.980575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.443 [2024-07-15 11:40:41.990869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.443 [2024-07-15 11:40:41.991176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.443 [2024-07-15 11:40:41.991193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.443 [2024-07-15 11:40:42.001564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.443 [2024-07-15 11:40:42.001961] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.443 [2024-07-15 11:40:42.001978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.443 [2024-07-15 11:40:42.013067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.443 [2024-07-15 11:40:42.013488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.443 [2024-07-15 11:40:42.013505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.443 [2024-07-15 11:40:42.024811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.443 [2024-07-15 11:40:42.025125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.443 [2024-07-15 11:40:42.025141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.443 [2024-07-15 11:40:42.036414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.443 [2024-07-15 11:40:42.036778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.443 [2024-07-15 11:40:42.036794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.443 [2024-07-15 11:40:42.047546] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.443 [2024-07-15 11:40:42.047759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.443 [2024-07-15 11:40:42.047775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.443 [2024-07-15 11:40:42.058643] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.443 [2024-07-15 11:40:42.058987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.443 [2024-07-15 11:40:42.059004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.443 [2024-07-15 11:40:42.068791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.443 [2024-07-15 11:40:42.069219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.443 [2024-07-15 11:40:42.069237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.443 [2024-07-15 11:40:42.079474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.443 
[2024-07-15 11:40:42.079892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.443 [2024-07-15 11:40:42.079910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.443 [2024-07-15 11:40:42.089824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.443 [2024-07-15 11:40:42.090269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.443 [2024-07-15 11:40:42.090286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.443 [2024-07-15 11:40:42.100938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.443 [2024-07-15 11:40:42.101243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.443 [2024-07-15 11:40:42.101260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.443 [2024-07-15 11:40:42.111875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.443 [2024-07-15 11:40:42.112324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.443 [2024-07-15 11:40:42.112340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.443 [2024-07-15 11:40:42.122212] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.443 [2024-07-15 11:40:42.122660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.443 [2024-07-15 11:40:42.122677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.443 [2024-07-15 11:40:42.132171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.443 [2024-07-15 11:40:42.132518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.443 [2024-07-15 11:40:42.132534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.443 [2024-07-15 11:40:42.142881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.443 [2024-07-15 11:40:42.143278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.443 [2024-07-15 11:40:42.143301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.705 [2024-07-15 11:40:42.150770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) 
with pdu=0x2000190fef90 00:29:13.705 [2024-07-15 11:40:42.151008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.705 [2024-07-15 11:40:42.151025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.705 [2024-07-15 11:40:42.158284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.705 [2024-07-15 11:40:42.158536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.705 [2024-07-15 11:40:42.158552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.705 [2024-07-15 11:40:42.165537] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.705 [2024-07-15 11:40:42.165874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.705 [2024-07-15 11:40:42.165890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.705 [2024-07-15 11:40:42.173769] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.705 [2024-07-15 11:40:42.174003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.705 [2024-07-15 11:40:42.174028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.706 [2024-07-15 11:40:42.181495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.706 [2024-07-15 11:40:42.181764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.706 [2024-07-15 11:40:42.181780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.706 [2024-07-15 11:40:42.190553] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.706 [2024-07-15 11:40:42.190813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.706 [2024-07-15 11:40:42.190830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.706 [2024-07-15 11:40:42.198818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.706 [2024-07-15 11:40:42.199044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.706 [2024-07-15 11:40:42.199060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.706 [2024-07-15 11:40:42.207389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.706 [2024-07-15 11:40:42.207630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.706 [2024-07-15 11:40:42.207646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.706 [2024-07-15 11:40:42.215291] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.706 [2024-07-15 11:40:42.215638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.706 [2024-07-15 11:40:42.215654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.706 [2024-07-15 11:40:42.223327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.706 [2024-07-15 11:40:42.223621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.706 [2024-07-15 11:40:42.223638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.706 [2024-07-15 11:40:42.229812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.706 [2024-07-15 11:40:42.230054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.706 [2024-07-15 11:40:42.230071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.706 [2024-07-15 11:40:42.237132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.706 [2024-07-15 11:40:42.237400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.706 [2024-07-15 11:40:42.237416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.706 [2024-07-15 11:40:42.246339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.706 [2024-07-15 11:40:42.246644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.706 [2024-07-15 11:40:42.246661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.706 [2024-07-15 11:40:42.255036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.706 [2024-07-15 11:40:42.255267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.706 [2024-07-15 11:40:42.255284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.706 [2024-07-15 11:40:42.263635] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.706 [2024-07-15 11:40:42.263902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.706 [2024-07-15 11:40:42.263919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.706 [2024-07-15 11:40:42.273438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.706 [2024-07-15 11:40:42.273788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.706 [2024-07-15 11:40:42.273805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.706 [2024-07-15 11:40:42.283978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.706 [2024-07-15 11:40:42.284356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.706 [2024-07-15 11:40:42.284374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.706 [2024-07-15 11:40:42.293779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.706 [2024-07-15 11:40:42.294085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.706 [2024-07-15 11:40:42.294102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.706 [2024-07-15 11:40:42.303983] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.706 [2024-07-15 11:40:42.304216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.706 [2024-07-15 11:40:42.304232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.706 [2024-07-15 11:40:42.313772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.706 [2024-07-15 11:40:42.314109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.706 [2024-07-15 11:40:42.314129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.706 [2024-07-15 11:40:42.324237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.706 [2024-07-15 11:40:42.324628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.706 [2024-07-15 11:40:42.324644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:29:13.706 [2024-07-15 11:40:42.334236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.706 [2024-07-15 11:40:42.334617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.706 [2024-07-15 11:40:42.334634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.706 [2024-07-15 11:40:42.344480] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.706 [2024-07-15 11:40:42.344668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.706 [2024-07-15 11:40:42.344683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.706 [2024-07-15 11:40:42.354531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.706 [2024-07-15 11:40:42.354721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.706 [2024-07-15 11:40:42.354736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.706 [2024-07-15 11:40:42.365107] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.706 [2024-07-15 11:40:42.365497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.706 [2024-07-15 11:40:42.365515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.706 [2024-07-15 11:40:42.375385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.706 [2024-07-15 11:40:42.375600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.706 [2024-07-15 11:40:42.375619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.706 [2024-07-15 11:40:42.387216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.706 [2024-07-15 11:40:42.387541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.706 [2024-07-15 11:40:42.387558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.706 [2024-07-15 11:40:42.399041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.706 [2024-07-15 11:40:42.399310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.706 [2024-07-15 11:40:42.399327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.968 [2024-07-15 11:40:42.409532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.968 [2024-07-15 11:40:42.409816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.968 [2024-07-15 11:40:42.409833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.968 [2024-07-15 11:40:42.419132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.968 [2024-07-15 11:40:42.419451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.968 [2024-07-15 11:40:42.419467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.968 [2024-07-15 11:40:42.427736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.968 [2024-07-15 11:40:42.428002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.968 [2024-07-15 11:40:42.428018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.968 [2024-07-15 11:40:42.434914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.969 [2024-07-15 11:40:42.435132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.969 [2024-07-15 11:40:42.435147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.969 [2024-07-15 11:40:42.443498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.969 [2024-07-15 11:40:42.443771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.969 [2024-07-15 11:40:42.443787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.969 [2024-07-15 11:40:42.450556] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.969 [2024-07-15 11:40:42.450981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.969 [2024-07-15 11:40:42.450998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.969 [2024-07-15 11:40:42.459501] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.969 [2024-07-15 11:40:42.459750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.969 [2024-07-15 11:40:42.459767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.969 [2024-07-15 11:40:42.466417] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.969 [2024-07-15 11:40:42.466740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.969 [2024-07-15 11:40:42.466757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.969 [2024-07-15 11:40:42.474457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.969 [2024-07-15 11:40:42.474633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.969 [2024-07-15 11:40:42.474650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.969 [2024-07-15 11:40:42.482277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.969 [2024-07-15 11:40:42.482684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.969 [2024-07-15 11:40:42.482701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.969 [2024-07-15 11:40:42.492276] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.969 [2024-07-15 11:40:42.492558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.969 [2024-07-15 11:40:42.492575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.969 [2024-07-15 11:40:42.501946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.969 [2024-07-15 11:40:42.502201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.969 [2024-07-15 11:40:42.502217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.969 [2024-07-15 11:40:42.512503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.969 [2024-07-15 11:40:42.512750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.969 [2024-07-15 11:40:42.512765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.969 [2024-07-15 11:40:42.522681] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.969 [2024-07-15 11:40:42.523028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.969 [2024-07-15 11:40:42.523045] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.969 [2024-07-15 11:40:42.532937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.969 [2024-07-15 11:40:42.533143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.969 [2024-07-15 11:40:42.533159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.969 [2024-07-15 11:40:42.543203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.969 [2024-07-15 11:40:42.543516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.969 [2024-07-15 11:40:42.543532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.969 [2024-07-15 11:40:42.552817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.969 [2024-07-15 11:40:42.553028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.969 [2024-07-15 11:40:42.553043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.969 [2024-07-15 11:40:42.561792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.969 [2024-07-15 11:40:42.562199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.969 [2024-07-15 11:40:42.562215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.969 [2024-07-15 11:40:42.571166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.969 [2024-07-15 11:40:42.571473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.969 [2024-07-15 11:40:42.571489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.969 [2024-07-15 11:40:42.579407] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.969 [2024-07-15 11:40:42.579624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.969 [2024-07-15 11:40:42.579640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.969 [2024-07-15 11:40:42.587383] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.969 [2024-07-15 11:40:42.587629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.969 
[2024-07-15 11:40:42.587646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.969 [2024-07-15 11:40:42.595833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.969 [2024-07-15 11:40:42.596176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.969 [2024-07-15 11:40:42.596192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.969 [2024-07-15 11:40:42.604515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.969 [2024-07-15 11:40:42.604762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.969 [2024-07-15 11:40:42.604780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.969 [2024-07-15 11:40:42.614344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.969 [2024-07-15 11:40:42.614609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.969 [2024-07-15 11:40:42.614628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.969 [2024-07-15 11:40:42.622962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.969 [2024-07-15 11:40:42.623296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.969 [2024-07-15 11:40:42.623313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.969 [2024-07-15 11:40:42.631867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.969 [2024-07-15 11:40:42.632211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.969 [2024-07-15 11:40:42.632229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.969 [2024-07-15 11:40:42.641421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.969 [2024-07-15 11:40:42.641825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.969 [2024-07-15 11:40:42.641842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.969 [2024-07-15 11:40:42.650117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.969 [2024-07-15 11:40:42.650317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:13.969 [2024-07-15 11:40:42.650333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.969 [2024-07-15 11:40:42.657081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.969 [2024-07-15 11:40:42.657273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.969 [2024-07-15 11:40:42.657290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.969 [2024-07-15 11:40:42.665596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:13.969 [2024-07-15 11:40:42.665894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.969 [2024-07-15 11:40:42.665912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.283 [2024-07-15 11:40:42.674105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.283 [2024-07-15 11:40:42.674541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.283 [2024-07-15 11:40:42.674558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.283 [2024-07-15 11:40:42.683644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.283 [2024-07-15 11:40:42.684069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.283 [2024-07-15 11:40:42.684086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.283 [2024-07-15 11:40:42.693710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.283 [2024-07-15 11:40:42.694103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.283 [2024-07-15 11:40:42.694121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.283 [2024-07-15 11:40:42.703689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.283 [2024-07-15 11:40:42.703997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.283 [2024-07-15 11:40:42.704013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.283 [2024-07-15 11:40:42.714836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.283 [2024-07-15 11:40:42.715132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.283 [2024-07-15 11:40:42.715149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.283 [2024-07-15 11:40:42.724900] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.283 [2024-07-15 11:40:42.725209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.283 [2024-07-15 11:40:42.725226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.283 [2024-07-15 11:40:42.734630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.283 [2024-07-15 11:40:42.734999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.283 [2024-07-15 11:40:42.735016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.283 [2024-07-15 11:40:42.744577] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.283 [2024-07-15 11:40:42.744909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.283 [2024-07-15 11:40:42.744925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.283 [2024-07-15 11:40:42.755023] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.283 [2024-07-15 11:40:42.755472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.283 [2024-07-15 11:40:42.755490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.283 [2024-07-15 11:40:42.764881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.283 [2024-07-15 11:40:42.765097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.283 [2024-07-15 11:40:42.765113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.283 [2024-07-15 11:40:42.775273] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.283 [2024-07-15 11:40:42.775535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.283 [2024-07-15 11:40:42.775553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.283 [2024-07-15 11:40:42.785749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.283 [2024-07-15 11:40:42.785979] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.283 [2024-07-15 11:40:42.785995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.283 [2024-07-15 11:40:42.796074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.283 [2024-07-15 11:40:42.796558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.283 [2024-07-15 11:40:42.796574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.283 [2024-07-15 11:40:42.805888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.283 [2024-07-15 11:40:42.806129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.283 [2024-07-15 11:40:42.806146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.283 [2024-07-15 11:40:42.817137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.283 [2024-07-15 11:40:42.817459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.283 [2024-07-15 11:40:42.817476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.283 [2024-07-15 11:40:42.827072] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.283 [2024-07-15 11:40:42.827437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.283 [2024-07-15 11:40:42.827453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.283 [2024-07-15 11:40:42.837089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.283 [2024-07-15 11:40:42.837286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.283 [2024-07-15 11:40:42.837302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.283 [2024-07-15 11:40:42.847148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.283 [2024-07-15 11:40:42.847473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.283 [2024-07-15 11:40:42.847489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.283 [2024-07-15 11:40:42.858300] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.283 [2024-07-15 11:40:42.858523] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.283 [2024-07-15 11:40:42.858539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.283 [2024-07-15 11:40:42.868647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.283 [2024-07-15 11:40:42.868861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.283 [2024-07-15 11:40:42.868880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.283 [2024-07-15 11:40:42.878219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.283 [2024-07-15 11:40:42.878398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.283 [2024-07-15 11:40:42.878414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.283 [2024-07-15 11:40:42.887558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.283 [2024-07-15 11:40:42.887825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.283 [2024-07-15 11:40:42.887842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.283 [2024-07-15 11:40:42.896886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.283 [2024-07-15 11:40:42.897199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.283 [2024-07-15 11:40:42.897215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.283 [2024-07-15 11:40:42.907230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.283 [2024-07-15 11:40:42.907579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.284 [2024-07-15 11:40:42.907596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.284 [2024-07-15 11:40:42.917329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.284 [2024-07-15 11:40:42.917591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.284 [2024-07-15 11:40:42.917608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.284 [2024-07-15 11:40:42.927175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.284 
[2024-07-15 11:40:42.927374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.284 [2024-07-15 11:40:42.927389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.284 [2024-07-15 11:40:42.937618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.284 [2024-07-15 11:40:42.937893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.284 [2024-07-15 11:40:42.937909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.284 [2024-07-15 11:40:42.947768] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.284 [2024-07-15 11:40:42.947956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.284 [2024-07-15 11:40:42.947972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.284 [2024-07-15 11:40:42.957343] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.284 [2024-07-15 11:40:42.957747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.284 [2024-07-15 11:40:42.957764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.284 [2024-07-15 11:40:42.967384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.284 [2024-07-15 11:40:42.967762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.284 [2024-07-15 11:40:42.967778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.284 [2024-07-15 11:40:42.977679] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.284 [2024-07-15 11:40:42.977900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.284 [2024-07-15 11:40:42.977916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.546 [2024-07-15 11:40:42.987394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.546 [2024-07-15 11:40:42.987773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.546 [2024-07-15 11:40:42.987790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.546 [2024-07-15 11:40:42.997757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) 
with pdu=0x2000190fef90 00:29:14.546 [2024-07-15 11:40:42.998096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.546 [2024-07-15 11:40:42.998113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.546 [2024-07-15 11:40:43.008145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.546 [2024-07-15 11:40:43.008354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.546 [2024-07-15 11:40:43.008370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.546 [2024-07-15 11:40:43.016262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.546 [2024-07-15 11:40:43.016580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.546 [2024-07-15 11:40:43.016596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.546 [2024-07-15 11:40:43.023778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.546 [2024-07-15 11:40:43.024019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.546 [2024-07-15 11:40:43.024036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.546 [2024-07-15 11:40:43.032046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.546 [2024-07-15 11:40:43.032399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.546 [2024-07-15 11:40:43.032420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.546 [2024-07-15 11:40:43.040877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.546 [2024-07-15 11:40:43.041207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.546 [2024-07-15 11:40:43.041224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.546 [2024-07-15 11:40:43.048975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.546 [2024-07-15 11:40:43.049202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.546 [2024-07-15 11:40:43.049218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.546 [2024-07-15 11:40:43.056741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.546 [2024-07-15 11:40:43.057033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.546 [2024-07-15 11:40:43.057049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.546 [2024-07-15 11:40:43.064666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.546 [2024-07-15 11:40:43.064861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.546 [2024-07-15 11:40:43.064877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.546 [2024-07-15 11:40:43.073330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.546 [2024-07-15 11:40:43.073701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.546 [2024-07-15 11:40:43.073719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.546 [2024-07-15 11:40:43.083016] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.546 [2024-07-15 11:40:43.083325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.546 [2024-07-15 11:40:43.083342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.546 [2024-07-15 11:40:43.091707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.546 [2024-07-15 11:40:43.091991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.546 [2024-07-15 11:40:43.092008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.546 [2024-07-15 11:40:43.099311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.546 [2024-07-15 11:40:43.099633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.546 [2024-07-15 11:40:43.099650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.546 [2024-07-15 11:40:43.106563] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.546 [2024-07-15 11:40:43.106787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.546 [2024-07-15 11:40:43.106803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.546 [2024-07-15 11:40:43.114152] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.546 [2024-07-15 11:40:43.114495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.546 [2024-07-15 11:40:43.114512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.546 [2024-07-15 11:40:43.123564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.546 [2024-07-15 11:40:43.123930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.546 [2024-07-15 11:40:43.123946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.546 [2024-07-15 11:40:43.133254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.546 [2024-07-15 11:40:43.133466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.546 [2024-07-15 11:40:43.133481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.546 [2024-07-15 11:40:43.143056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.546 [2024-07-15 11:40:43.143395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.546 [2024-07-15 11:40:43.143412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.546 [2024-07-15 11:40:43.151922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.546 [2024-07-15 11:40:43.152277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.546 [2024-07-15 11:40:43.152294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.546 [2024-07-15 11:40:43.161350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.546 [2024-07-15 11:40:43.161637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.546 [2024-07-15 11:40:43.161653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.546 [2024-07-15 11:40:43.171132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.546 [2024-07-15 11:40:43.171426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.546 [2024-07-15 11:40:43.171442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:29:14.546 [2024-07-15 11:40:43.181395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.546 [2024-07-15 11:40:43.181699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.546 [2024-07-15 11:40:43.181716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.546 [2024-07-15 11:40:43.190713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.546 [2024-07-15 11:40:43.191112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.546 [2024-07-15 11:40:43.191133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.546 [2024-07-15 11:40:43.199751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.546 [2024-07-15 11:40:43.200065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.546 [2024-07-15 11:40:43.200082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.546 [2024-07-15 11:40:43.209960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.546 [2024-07-15 11:40:43.210356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.546 [2024-07-15 11:40:43.210373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.546 [2024-07-15 11:40:43.219491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.547 [2024-07-15 11:40:43.219742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.547 [2024-07-15 11:40:43.219759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.547 [2024-07-15 11:40:43.229561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.547 [2024-07-15 11:40:43.229861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.547 [2024-07-15 11:40:43.229878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.547 [2024-07-15 11:40:43.236638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.547 [2024-07-15 11:40:43.236831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.547 [2024-07-15 11:40:43.236847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.547 [2024-07-15 11:40:43.243256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.547 [2024-07-15 11:40:43.243503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.547 [2024-07-15 11:40:43.243518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.808 [2024-07-15 11:40:43.251930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.808 [2024-07-15 11:40:43.252258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.808 [2024-07-15 11:40:43.252276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.808 [2024-07-15 11:40:43.260150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.808 [2024-07-15 11:40:43.260328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.808 [2024-07-15 11:40:43.260346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.808 [2024-07-15 11:40:43.267851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.808 [2024-07-15 11:40:43.268051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.808 [2024-07-15 11:40:43.268067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.808 [2024-07-15 11:40:43.276061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.808 [2024-07-15 11:40:43.276378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.808 [2024-07-15 11:40:43.276395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.808 [2024-07-15 11:40:43.286339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.808 [2024-07-15 11:40:43.286546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.808 [2024-07-15 11:40:43.286562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.808 [2024-07-15 11:40:43.295544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.808 [2024-07-15 11:40:43.295866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.808 [2024-07-15 11:40:43.295883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.808 [2024-07-15 11:40:43.306199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.808 [2024-07-15 11:40:43.306635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.808 [2024-07-15 11:40:43.306652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.808 [2024-07-15 11:40:43.316761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.808 [2024-07-15 11:40:43.317128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.808 [2024-07-15 11:40:43.317145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.808 [2024-07-15 11:40:43.326430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.808 [2024-07-15 11:40:43.326776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.808 [2024-07-15 11:40:43.326793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.808 [2024-07-15 11:40:43.336779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.808 [2024-07-15 11:40:43.337180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.808 [2024-07-15 11:40:43.337196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.808 [2024-07-15 11:40:43.347789] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.808 [2024-07-15 11:40:43.348182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.808 [2024-07-15 11:40:43.348198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.808 [2024-07-15 11:40:43.358979] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.808 [2024-07-15 11:40:43.359361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.808 [2024-07-15 11:40:43.359378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.808 [2024-07-15 11:40:43.369068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.808 [2024-07-15 11:40:43.369427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.808 [2024-07-15 11:40:43.369444] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.808 [2024-07-15 11:40:43.378189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.808 [2024-07-15 11:40:43.378322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.808 [2024-07-15 11:40:43.378338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.808 [2024-07-15 11:40:43.387933] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.808 [2024-07-15 11:40:43.388391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.808 [2024-07-15 11:40:43.388407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.808 [2024-07-15 11:40:43.396907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.808 [2024-07-15 11:40:43.397181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.808 [2024-07-15 11:40:43.397197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.808 [2024-07-15 11:40:43.405606] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.808 [2024-07-15 11:40:43.405968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.808 [2024-07-15 11:40:43.405985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.808 [2024-07-15 11:40:43.414349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.808 [2024-07-15 11:40:43.414525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.808 [2024-07-15 11:40:43.414541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.808 [2024-07-15 11:40:43.422546] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.808 [2024-07-15 11:40:43.422876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.808 [2024-07-15 11:40:43.422892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.808 [2024-07-15 11:40:43.430238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.808 [2024-07-15 11:40:43.430591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.808 
[2024-07-15 11:40:43.430608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.808 [2024-07-15 11:40:43.438758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.808 [2024-07-15 11:40:43.438934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.808 [2024-07-15 11:40:43.438950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.808 [2024-07-15 11:40:43.446698] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.808 [2024-07-15 11:40:43.446950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.808 [2024-07-15 11:40:43.446967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.808 [2024-07-15 11:40:43.454081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.808 [2024-07-15 11:40:43.454376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.808 [2024-07-15 11:40:43.454393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.808 [2024-07-15 11:40:43.460735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.808 [2024-07-15 11:40:43.460958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.808 [2024-07-15 11:40:43.460974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.808 [2024-07-15 11:40:43.468289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.808 [2024-07-15 11:40:43.468500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.808 [2024-07-15 11:40:43.468516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.808 [2024-07-15 11:40:43.475321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.808 [2024-07-15 11:40:43.475579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.808 [2024-07-15 11:40:43.475597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.808 [2024-07-15 11:40:43.483779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.808 [2024-07-15 11:40:43.484244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:14.808 [2024-07-15 11:40:43.484262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.809 [2024-07-15 11:40:43.494048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.809 [2024-07-15 11:40:43.494479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.809 [2024-07-15 11:40:43.494499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.809 [2024-07-15 11:40:43.504391] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:14.809 [2024-07-15 11:40:43.504823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.809 [2024-07-15 11:40:43.504840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:15.069 [2024-07-15 11:40:43.514441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7abc80) with pdu=0x2000190fef90 00:29:15.069 [2024-07-15 11:40:43.514690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.069 [2024-07-15 11:40:43.514707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:15.069 00:29:15.069 Latency(us) 00:29:15.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.069 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:15.069 nvme0n1 : 2.01 3266.26 408.28 0.00 0.00 4890.11 2826.24 22173.01 00:29:15.069 =================================================================================================================== 00:29:15.069 Total : 3266.26 408.28 0.00 0.00 4890.11 2826.24 22173.01 00:29:15.069 0 00:29:15.069 11:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:15.069 11:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:15.069 11:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:15.069 11:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:15.069 | .driver_specific 00:29:15.069 | .nvme_error 00:29:15.069 | .status_code 00:29:15.069 | .command_transient_transport_error' 00:29:15.069 11:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 211 > 0 )) 00:29:15.069 11:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3725913 00:29:15.069 11:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3725913 ']' 00:29:15.069 11:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3725913 00:29:15.069 11:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:15.069 11:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:15.069 11:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3725913 00:29:15.069 11:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:15.069 11:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:15.069 11:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3725913' 00:29:15.069 killing process with pid 3725913 00:29:15.069 11:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3725913 00:29:15.069 Received shutdown signal, test time was about 2.000000 seconds 00:29:15.069 00:29:15.069 Latency(us) 00:29:15.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.069 =================================================================================================================== 00:29:15.069 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:15.069 11:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3725913 00:29:15.330 11:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3723511 00:29:15.330 11:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3723511 ']' 00:29:15.330 11:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3723511 00:29:15.330 11:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:15.330 11:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:15.330 11:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3723511 00:29:15.330 11:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:15.330 11:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:15.330 11:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3723511' 00:29:15.330 killing process with pid 3723511 00:29:15.330 11:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3723511 00:29:15.330 11:40:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3723511 00:29:15.591 00:29:15.591 real 0m16.108s 00:29:15.591 user 0m31.665s 00:29:15.591 sys 0m3.188s 00:29:15.591 11:40:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:15.591 11:40:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:15.591 ************************************ 00:29:15.591 END TEST nvmf_digest_error 00:29:15.591 ************************************ 00:29:15.591 11:40:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:29:15.591 11:40:44 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:15.591 11:40:44 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:15.591 11:40:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:15.591 11:40:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:29:15.591 11:40:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 
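The get_transient_errcount check above pulls the host-side NVMe error counters out of bdev_get_iostat and asserts that the injected digest failures actually surfaced (211 transient transport errors in this run). A standalone sketch of that check, reusing the same socket path and jq filter captured in the trace and not part of the test output itself:

  # sketch only -- same RPC call and jq filter as shown above
  errs=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errs > 0 ))   # nvmf_digest_error passes only if the data-digest errors were counted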
00:29:15.591 11:40:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:29:15.591 11:40:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:15.591 11:40:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:15.591 rmmod nvme_tcp 00:29:15.591 rmmod nvme_fabrics 00:29:15.591 rmmod nvme_keyring 00:29:15.591 11:40:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:15.591 11:40:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:29:15.591 11:40:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:29:15.591 11:40:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3723511 ']' 00:29:15.591 11:40:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3723511 00:29:15.591 11:40:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 3723511 ']' 00:29:15.591 11:40:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 3723511 00:29:15.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3723511) - No such process 00:29:15.591 11:40:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 3723511 is not found' 00:29:15.591 Process with pid 3723511 is not found 00:29:15.591 11:40:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:15.591 11:40:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:15.591 11:40:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:15.591 11:40:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:15.591 11:40:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:15.591 11:40:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.591 11:40:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:15.591 11:40:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.139 11:40:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:18.139 00:29:18.139 real 0m42.234s 00:29:18.139 user 1m5.938s 00:29:18.139 sys 0m12.024s 00:29:18.139 11:40:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:18.139 11:40:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:18.139 ************************************ 00:29:18.139 END TEST nvmf_digest 00:29:18.139 ************************************ 00:29:18.139 11:40:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:18.139 11:40:46 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:29:18.139 11:40:46 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:29:18.139 11:40:46 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:29:18.139 11:40:46 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:18.139 11:40:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:18.139 11:40:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:18.139 11:40:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:18.139 ************************************ 00:29:18.139 START TEST nvmf_bdevperf 00:29:18.139 ************************************ 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:18.139 * Looking for test storage... 00:29:18.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:18.139 11:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:24.729 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:24.729 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:24.729 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.729 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:24.730 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.730 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:24.730 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:24.730 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.730 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:24.730 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:24.730 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.730 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:24.730 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:24.730 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:24.730 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:24.730 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:24.730 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:24.730 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:24.730 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:24.730 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:24.730 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:24.730 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:24.730 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:24.730 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:24.730 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:24.730 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:24.730 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:24.730 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:24.730 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:24.730 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:24.730 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:24.730 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:24.730 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:24.990 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:24.990 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:24.990 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:24.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:24.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.529 ms 00:29:24.990 00:29:24.990 --- 10.0.0.2 ping statistics --- 00:29:24.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.990 rtt min/avg/max/mdev = 0.529/0.529/0.529/0.000 ms 00:29:24.990 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:24.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:24.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.455 ms 00:29:24.990 00:29:24.990 --- 10.0.0.1 ping statistics --- 00:29:24.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.990 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:29:24.990 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:24.990 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:29:24.990 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:24.990 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:24.990 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:24.990 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:24.990 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:24.990 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:24.990 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:24.990 11:40:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:24.990 11:40:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:24.990 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:24.990 11:40:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:24.990 11:40:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:24.990 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:24.990 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3730651 00:29:24.990 11:40:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3730651 00:29:24.990 11:40:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3730651 ']' 00:29:24.990 11:40:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.990 11:40:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:24.991 11:40:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.991 11:40:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:24.991 11:40:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:24.991 [2024-07-15 11:40:53.597965] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:29:24.991 [2024-07-15 11:40:53.598025] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.991 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.991 [2024-07-15 11:40:53.683379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:25.251 [2024-07-15 11:40:53.750898] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:25.251 [2024-07-15 11:40:53.750938] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:25.251 [2024-07-15 11:40:53.750946] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:25.251 [2024-07-15 11:40:53.750952] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:25.251 [2024-07-15 11:40:53.750958] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:25.251 [2024-07-15 11:40:53.751101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:25.251 [2024-07-15 11:40:53.751260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:25.251 [2024-07-15 11:40:53.751371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.823 [2024-07-15 11:40:54.423480] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.823 Malloc0 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.823 [2024-07-15 11:40:54.488452] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:25.823 { 00:29:25.823 "params": { 00:29:25.823 "name": "Nvme$subsystem", 00:29:25.823 "trtype": "$TEST_TRANSPORT", 00:29:25.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.823 "adrfam": "ipv4", 00:29:25.823 "trsvcid": "$NVMF_PORT", 00:29:25.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.823 "hdgst": ${hdgst:-false}, 00:29:25.823 "ddgst": ${ddgst:-false} 00:29:25.823 }, 00:29:25.823 "method": "bdev_nvme_attach_controller" 00:29:25.823 } 00:29:25.823 EOF 00:29:25.823 )") 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:25.823 11:40:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:25.823 "params": { 00:29:25.823 "name": "Nvme1", 00:29:25.823 "trtype": "tcp", 00:29:25.823 "traddr": "10.0.0.2", 00:29:25.823 "adrfam": "ipv4", 00:29:25.823 "trsvcid": "4420", 00:29:25.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:25.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:25.823 "hdgst": false, 00:29:25.823 "ddgst": false 00:29:25.823 }, 00:29:25.823 "method": "bdev_nvme_attach_controller" 00:29:25.823 }' 00:29:26.083 [2024-07-15 11:40:54.541681] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:29:26.083 [2024-07-15 11:40:54.541733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3730959 ] 00:29:26.083 EAL: No free 2048 kB hugepages reported on node 1 00:29:26.083 [2024-07-15 11:40:54.599416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.083 [2024-07-15 11:40:54.663845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.344 Running I/O for 1 seconds... 
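The rpc_cmd calls above are, in effect, scripts/rpc.py invocations against the target's /var/tmp/spdk.sock; collected in one place as a sketch that reuses exactly the arguments shown in this trace (not a verbatim excerpt of the script), the target-side setup for the bdevperf runs is:

  # sketch of the tgt_init steps captured above, expressed as direct rpc.py calls
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf then attaches to that subsystem over NVMe/TCP using the bdev_nvme_attach_controller JSON printed just above and runs the 1-second verify workload whose results follow.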
00:29:27.284 00:29:27.284 Latency(us) 00:29:27.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.284 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:27.284 Verification LBA range: start 0x0 length 0x4000 00:29:27.285 Nvme1n1 : 1.01 8846.31 34.56 0.00 0.00 14405.51 3167.57 13871.79 00:29:27.285 =================================================================================================================== 00:29:27.285 Total : 8846.31 34.56 0.00 0.00 14405.51 3167.57 13871.79 00:29:27.544 11:40:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3731290 00:29:27.544 11:40:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:27.544 11:40:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:27.544 11:40:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:27.544 11:40:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:27.544 11:40:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:27.544 11:40:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:27.544 11:40:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:27.544 { 00:29:27.544 "params": { 00:29:27.544 "name": "Nvme$subsystem", 00:29:27.544 "trtype": "$TEST_TRANSPORT", 00:29:27.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.544 "adrfam": "ipv4", 00:29:27.544 "trsvcid": "$NVMF_PORT", 00:29:27.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.544 "hdgst": ${hdgst:-false}, 00:29:27.545 "ddgst": ${ddgst:-false} 00:29:27.545 }, 00:29:27.545 "method": "bdev_nvme_attach_controller" 00:29:27.545 } 00:29:27.545 EOF 00:29:27.545 )") 00:29:27.545 11:40:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:27.545 11:40:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:27.545 11:40:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:27.545 11:40:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:27.545 "params": { 00:29:27.545 "name": "Nvme1", 00:29:27.545 "trtype": "tcp", 00:29:27.545 "traddr": "10.0.0.2", 00:29:27.545 "adrfam": "ipv4", 00:29:27.545 "trsvcid": "4420", 00:29:27.545 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:27.545 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:27.545 "hdgst": false, 00:29:27.545 "ddgst": false 00:29:27.545 }, 00:29:27.545 "method": "bdev_nvme_attach_controller" 00:29:27.545 }' 00:29:27.545 [2024-07-15 11:40:56.045320] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:29:27.545 [2024-07-15 11:40:56.045380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3731290 ] 00:29:27.545 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.545 [2024-07-15 11:40:56.104383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.545 [2024-07-15 11:40:56.167344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.804 Running I/O for 15 seconds... 
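What follows is the failover half of the test: a second bdevperf instance (pid 3731290) is started against the same subsystem with a 15-second verify workload and -f, sleep 3 elapses, and the nvmf target (pid 3730651) is killed out from under it; the ABORTED - SQ DELETION completions below are the host aborting its outstanding I/O as the queue pairs to the dead target are torn down. A minimal sketch of that sequence, using only commands and variables already visible in this trace (paths shortened), not the literal script:

  # illustrative sketch of the failover sequence, not part of the captured output
  ./build/examples/bdevperf --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w verify -t 15 -f &   # 15 s verify run, with -f as in the log
  sleep 3
  kill -9 "$nvmfpid"                        # drop the target mid-run; the host then reports the SQ DELETION aborts seen below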
00:29:30.344 11:40:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3730651 00:29:30.344 11:40:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:30.344 [2024-07-15 11:40:59.011568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:93632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.344 [2024-07-15 11:40:59.011611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.344 [2024-07-15 11:40:59.011637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.344 [2024-07-15 11:40:59.011648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.344 [2024-07-15 11:40:59.011660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:93648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.344 [2024-07-15 11:40:59.011670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.344 [2024-07-15 11:40:59.011681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:93656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.344 [2024-07-15 11:40:59.011689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.344 [2024-07-15 11:40:59.011702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.344 [2024-07-15 11:40:59.011710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.344 [2024-07-15 11:40:59.011719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:93672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.344 [2024-07-15 11:40:59.011727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.344 [2024-07-15 11:40:59.011737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:93680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.344 [2024-07-15 11:40:59.011746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.344 [2024-07-15 11:40:59.011756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:93688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.344 [2024-07-15 11:40:59.011767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.344 [2024-07-15 11:40:59.011778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:93696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.344 [2024-07-15 11:40:59.011787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.344 [2024-07-15 11:40:59.011799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:93704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.344 [2024-07-15 
11:40:59.011809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.344 [2024-07-15 11:40:59.011818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:93712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.344 [2024-07-15 11:40:59.011829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.344 [2024-07-15 11:40:59.011841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:93720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.344 [2024-07-15 11:40:59.011851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.344 [2024-07-15 11:40:59.011863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:93728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.344 [2024-07-15 11:40:59.011873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.344 [2024-07-15 11:40:59.011884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:93736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.344 [2024-07-15 11:40:59.011896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.344 [2024-07-15 11:40:59.011911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:93744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.344 [2024-07-15 11:40:59.011922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.344 [2024-07-15 11:40:59.011931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:93752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.344 [2024-07-15 11:40:59.011939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.344 [2024-07-15 11:40:59.011948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:93760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.344 [2024-07-15 11:40:59.011955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.344 [2024-07-15 11:40:59.011965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:93768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.344 [2024-07-15 11:40:59.011972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.344 [2024-07-15 11:40:59.011982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.344 [2024-07-15 11:40:59.011991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.344 [2024-07-15 11:40:59.012000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.344 [2024-07-15 11:40:59.012008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.344 [2024-07-15 11:40:59.012018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:93792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.344 [2024-07-15 11:40:59.012025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.344 [2024-07-15 11:40:59.012036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:93800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.344 [2024-07-15 11:40:59.012044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.344 [2024-07-15 11:40:59.012055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:93808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.344 [2024-07-15 11:40:59.012062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.344 [2024-07-15 11:40:59.012073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:93816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.344 [2024-07-15 11:40:59.012081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.344 [2024-07-15 11:40:59.012091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:93824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.344 [2024-07-15 11:40:59.012098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.344 [2024-07-15 11:40:59.012108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:93848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:93856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:93864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:93872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:93880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:93888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:93896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:93904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:93912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:93920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:93928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:93936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:93944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:93952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:93960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:93976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:93984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:93992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:94000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:94016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:94024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012627] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:94056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:94064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012792] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:94136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:94152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.345 [2024-07-15 11:40:59.012901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.345 [2024-07-15 11:40:59.012910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.012917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.012927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.012934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.012944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.012951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.012960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94192 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.012967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.012977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.012984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.012993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 
[2024-07-15 11:40:59.013139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013308] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.346 [2024-07-15 11:40:59.013627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.346 [2024-07-15 11:40:59.013636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.347 [2024-07-15 11:40:59.013643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:30.347 [2024-07-15 11:40:59.013653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.347 [2024-07-15 11:40:59.013661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.347 [2024-07-15 11:40:59.013671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.347 [2024-07-15 11:40:59.013678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.347 [2024-07-15 11:40:59.013687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.347 [2024-07-15 11:40:59.013694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.347 [2024-07-15 11:40:59.013704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.347 [2024-07-15 11:40:59.013711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.347 [2024-07-15 11:40:59.013722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.347 [2024-07-15 11:40:59.013729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.347 [2024-07-15 11:40:59.013738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.347 [2024-07-15 11:40:59.013745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.347 [2024-07-15 11:40:59.013755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.347 [2024-07-15 11:40:59.013762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.347 [2024-07-15 11:40:59.013772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.347 [2024-07-15 11:40:59.013779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.347 [2024-07-15 11:40:59.013788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.347 [2024-07-15 11:40:59.013795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.347 [2024-07-15 11:40:59.013804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.347 [2024-07-15 11:40:59.013813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.347 [2024-07-15 11:40:59.013823] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.347 [2024-07-15 11:40:59.013830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.347 [2024-07-15 11:40:59.013839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.347 [2024-07-15 11:40:59.013846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.347 [2024-07-15 11:40:59.013855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.347 [2024-07-15 11:40:59.013863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.347 [2024-07-15 11:40:59.013872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.347 [2024-07-15 11:40:59.013880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.347 [2024-07-15 11:40:59.013889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.347 [2024-07-15 11:40:59.013895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.347 [2024-07-15 11:40:59.013904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.347 [2024-07-15 11:40:59.013912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.347 [2024-07-15 11:40:59.013921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe02a00 is same with the state(5) to be set 00:29:30.347 [2024-07-15 11:40:59.013932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:30.347 [2024-07-15 11:40:59.013938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:30.347 [2024-07-15 11:40:59.013944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94648 len:8 PRP1 0x0 PRP2 0x0 00:29:30.347 [2024-07-15 11:40:59.013952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.347 [2024-07-15 11:40:59.013992] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe02a00 was disconnected and freed. reset controller. 
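Every completion in the burst above carries the same status, printed as "(00/08)": status code type 0x0 (generic command status) and status code 0x08, which the NVMe specification defines as Command Aborted due to SQ Deletion, i.e. I/O submission queue 1 was torn down while these queued WRITEs were still outstanding, so they were completed with an abort status before the qpair was disconnected and freed. A minimal sketch for summarizing such a burst from a saved copy of this console output (the file name console.log is only an assumption for illustration):

# Assumes the console output above was saved to console.log (hypothetical name).
# Tally the abort statuses; a single "(00/08)" bucket confirms every queued
# command was completed as Command Aborted due to SQ Deletion.
grep -o 'ABORTED - SQ DELETION ([0-9a-f]*/[0-9a-f]*)' console.log | sort | uniq -c

# List the LBAs of the aborted WRITEs; they should form one contiguous run
# with an 8-block stride, matching the len:8 sequential writes above.
grep -o 'lba:[0-9]*' console.log | cut -d: -f2 | sort -n | uniq | head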
00:29:30.347 [2024-07-15 11:40:59.017530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.347 [2024-07-15 11:40:59.017578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.347 [2024-07-15 11:40:59.018480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.347 [2024-07-15 11:40:59.018517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.347 [2024-07-15 11:40:59.018528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.347 [2024-07-15 11:40:59.018767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.347 [2024-07-15 11:40:59.018988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.347 [2024-07-15 11:40:59.018999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.347 [2024-07-15 11:40:59.019007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.347 [2024-07-15 11:40:59.022513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.347 [2024-07-15 11:40:59.031590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.347 [2024-07-15 11:40:59.032232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.347 [2024-07-15 11:40:59.032270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.347 [2024-07-15 11:40:59.032281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.347 [2024-07-15 11:40:59.032517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.347 [2024-07-15 11:40:59.032737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.347 [2024-07-15 11:40:59.032746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.347 [2024-07-15 11:40:59.032754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.347 [2024-07-15 11:40:59.036262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
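Each reconnect cycle from here on fails the same way: posix_sock_create() reports connect() failed, errno = 111 against 10.0.0.2:4420, controller re-initialization is abandoned ("controller reinitialization failed" / "in failed state"), and bdev_nvme logs "Resetting controller failed" before the next attempt. On Linux, errno 111 is ECONNREFUSED ("Connection refused"), which can be double-checked from any shell on the test node; a trivial sketch:

# Map errno 111 from the connect() failures above to its symbolic name.
# On Linux this prints: ECONNREFUSED - Connection refused
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'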
00:29:30.609 [2024-07-15 11:40:59.045340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.609 [2024-07-15 11:40:59.045987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.609 [2024-07-15 11:40:59.046005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.609 [2024-07-15 11:40:59.046013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.609 [2024-07-15 11:40:59.046236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.609 [2024-07-15 11:40:59.046454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.609 [2024-07-15 11:40:59.046472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.609 [2024-07-15 11:40:59.046479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.609 [2024-07-15 11:40:59.049985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.609 [2024-07-15 11:40:59.059267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.609 [2024-07-15 11:40:59.059907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.609 [2024-07-15 11:40:59.059923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.609 [2024-07-15 11:40:59.059931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.609 [2024-07-15 11:40:59.060152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.609 [2024-07-15 11:40:59.060369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.609 [2024-07-15 11:40:59.060378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.609 [2024-07-15 11:40:59.060385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.609 [2024-07-15 11:40:59.063909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.609 [2024-07-15 11:40:59.073196] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.609 [2024-07-15 11:40:59.073737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.609 [2024-07-15 11:40:59.073754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.609 [2024-07-15 11:40:59.073762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.609 [2024-07-15 11:40:59.073978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.609 [2024-07-15 11:40:59.074201] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.609 [2024-07-15 11:40:59.074210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.609 [2024-07-15 11:40:59.074217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.609 [2024-07-15 11:40:59.077709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.609 [2024-07-15 11:40:59.086981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.609 [2024-07-15 11:40:59.087629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.609 [2024-07-15 11:40:59.087645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.609 [2024-07-15 11:40:59.087653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.609 [2024-07-15 11:40:59.087869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.609 [2024-07-15 11:40:59.088085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.609 [2024-07-15 11:40:59.088095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.609 [2024-07-15 11:40:59.088102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.609 [2024-07-15 11:40:59.091601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.609 [2024-07-15 11:40:59.100874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.609 [2024-07-15 11:40:59.101571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.609 [2024-07-15 11:40:59.101609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.609 [2024-07-15 11:40:59.101620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.609 [2024-07-15 11:40:59.101855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.609 [2024-07-15 11:40:59.102075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.609 [2024-07-15 11:40:59.102085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.609 [2024-07-15 11:40:59.102092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.609 [2024-07-15 11:40:59.105811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.609 [2024-07-15 11:40:59.114685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.609 [2024-07-15 11:40:59.115388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.609 [2024-07-15 11:40:59.115427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.609 [2024-07-15 11:40:59.115437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.609 [2024-07-15 11:40:59.115673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.609 [2024-07-15 11:40:59.115893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.609 [2024-07-15 11:40:59.115903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.609 [2024-07-15 11:40:59.115911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.609 [2024-07-15 11:40:59.119421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.609 [2024-07-15 11:40:59.128503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.609 [2024-07-15 11:40:59.129113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.609 [2024-07-15 11:40:59.129137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.609 [2024-07-15 11:40:59.129146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.609 [2024-07-15 11:40:59.129362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.609 [2024-07-15 11:40:59.129579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.609 [2024-07-15 11:40:59.129588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.609 [2024-07-15 11:40:59.129594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.609 [2024-07-15 11:40:59.133089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.609 [2024-07-15 11:40:59.142366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.609 [2024-07-15 11:40:59.142814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.609 [2024-07-15 11:40:59.142833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.609 [2024-07-15 11:40:59.142841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.609 [2024-07-15 11:40:59.143062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.609 [2024-07-15 11:40:59.143291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.610 [2024-07-15 11:40:59.143302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.610 [2024-07-15 11:40:59.143309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.610 [2024-07-15 11:40:59.146801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.610 [2024-07-15 11:40:59.156292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.610 [2024-07-15 11:40:59.156926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.610 [2024-07-15 11:40:59.156942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.610 [2024-07-15 11:40:59.156950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.610 [2024-07-15 11:40:59.157172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.610 [2024-07-15 11:40:59.157389] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.610 [2024-07-15 11:40:59.157398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.610 [2024-07-15 11:40:59.157405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.610 [2024-07-15 11:40:59.160894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.610 [2024-07-15 11:40:59.170169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.610 [2024-07-15 11:40:59.170798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.610 [2024-07-15 11:40:59.170814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.610 [2024-07-15 11:40:59.170821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.610 [2024-07-15 11:40:59.171037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.610 [2024-07-15 11:40:59.171261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.610 [2024-07-15 11:40:59.171270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.610 [2024-07-15 11:40:59.171277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.610 [2024-07-15 11:40:59.174768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.610 [2024-07-15 11:40:59.184040] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.610 [2024-07-15 11:40:59.184761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.610 [2024-07-15 11:40:59.184799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.610 [2024-07-15 11:40:59.184810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.610 [2024-07-15 11:40:59.185046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.610 [2024-07-15 11:40:59.185277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.610 [2024-07-15 11:40:59.185288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.610 [2024-07-15 11:40:59.185300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.610 [2024-07-15 11:40:59.188797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.610 [2024-07-15 11:40:59.197869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.610 [2024-07-15 11:40:59.198485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.610 [2024-07-15 11:40:59.198504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.610 [2024-07-15 11:40:59.198512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.610 [2024-07-15 11:40:59.198729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.610 [2024-07-15 11:40:59.198945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.610 [2024-07-15 11:40:59.198955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.610 [2024-07-15 11:40:59.198962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.610 [2024-07-15 11:40:59.202459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.610 [2024-07-15 11:40:59.211731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.610 [2024-07-15 11:40:59.212430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.610 [2024-07-15 11:40:59.212468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.610 [2024-07-15 11:40:59.212479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.610 [2024-07-15 11:40:59.212715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.610 [2024-07-15 11:40:59.212936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.610 [2024-07-15 11:40:59.212946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.610 [2024-07-15 11:40:59.212953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.610 [2024-07-15 11:40:59.216454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.610 [2024-07-15 11:40:59.225516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.610 [2024-07-15 11:40:59.226217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.610 [2024-07-15 11:40:59.226255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.610 [2024-07-15 11:40:59.226267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.610 [2024-07-15 11:40:59.226506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.610 [2024-07-15 11:40:59.226726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.610 [2024-07-15 11:40:59.226736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.610 [2024-07-15 11:40:59.226744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.610 [2024-07-15 11:40:59.230246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.610 [2024-07-15 11:40:59.239307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.610 [2024-07-15 11:40:59.239958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.610 [2024-07-15 11:40:59.239981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.610 [2024-07-15 11:40:59.239989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.610 [2024-07-15 11:40:59.240212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.610 [2024-07-15 11:40:59.240429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.610 [2024-07-15 11:40:59.240438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.610 [2024-07-15 11:40:59.240446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.610 [2024-07-15 11:40:59.243934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.610 [2024-07-15 11:40:59.253210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.610 [2024-07-15 11:40:59.253855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.610 [2024-07-15 11:40:59.253871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.610 [2024-07-15 11:40:59.253880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.610 [2024-07-15 11:40:59.254097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.610 [2024-07-15 11:40:59.254319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.610 [2024-07-15 11:40:59.254329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.610 [2024-07-15 11:40:59.254336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.610 [2024-07-15 11:40:59.257823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.610 [2024-07-15 11:40:59.267086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.610 [2024-07-15 11:40:59.267825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.610 [2024-07-15 11:40:59.267864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.610 [2024-07-15 11:40:59.267874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.610 [2024-07-15 11:40:59.268110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.610 [2024-07-15 11:40:59.268338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.611 [2024-07-15 11:40:59.268349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.611 [2024-07-15 11:40:59.268356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.611 [2024-07-15 11:40:59.271850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.611 [2024-07-15 11:40:59.280946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.611 [2024-07-15 11:40:59.281602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.611 [2024-07-15 11:40:59.281622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.611 [2024-07-15 11:40:59.281630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.611 [2024-07-15 11:40:59.281847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.611 [2024-07-15 11:40:59.282069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.611 [2024-07-15 11:40:59.282078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.611 [2024-07-15 11:40:59.282085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.611 [2024-07-15 11:40:59.285581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.611 [2024-07-15 11:40:59.294842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.611 [2024-07-15 11:40:59.295446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.611 [2024-07-15 11:40:59.295463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.611 [2024-07-15 11:40:59.295471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.611 [2024-07-15 11:40:59.295687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.611 [2024-07-15 11:40:59.295904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.611 [2024-07-15 11:40:59.295913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.611 [2024-07-15 11:40:59.295920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.611 [2024-07-15 11:40:59.299410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.611 [2024-07-15 11:40:59.308669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.611 [2024-07-15 11:40:59.309121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.611 [2024-07-15 11:40:59.309145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.611 [2024-07-15 11:40:59.309153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.873 [2024-07-15 11:40:59.309372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.873 [2024-07-15 11:40:59.309590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.873 [2024-07-15 11:40:59.309601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.873 [2024-07-15 11:40:59.309607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.873 [2024-07-15 11:40:59.313101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.873 [2024-07-15 11:40:59.322568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.873 [2024-07-15 11:40:59.323327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-07-15 11:40:59.323366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.873 [2024-07-15 11:40:59.323377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.873 [2024-07-15 11:40:59.323613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.873 [2024-07-15 11:40:59.323833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.873 [2024-07-15 11:40:59.323843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.873 [2024-07-15 11:40:59.323851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.873 [2024-07-15 11:40:59.327362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.873 [2024-07-15 11:40:59.336428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.873 [2024-07-15 11:40:59.337040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-07-15 11:40:59.337059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.873 [2024-07-15 11:40:59.337067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.873 [2024-07-15 11:40:59.337289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.873 [2024-07-15 11:40:59.337507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.873 [2024-07-15 11:40:59.337516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.873 [2024-07-15 11:40:59.337523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.873 [2024-07-15 11:40:59.341011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.873 [2024-07-15 11:40:59.350287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.873 [2024-07-15 11:40:59.350985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-07-15 11:40:59.351023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.873 [2024-07-15 11:40:59.351033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.873 [2024-07-15 11:40:59.351277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.873 [2024-07-15 11:40:59.351498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.873 [2024-07-15 11:40:59.351508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.873 [2024-07-15 11:40:59.351515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.873 [2024-07-15 11:40:59.355008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.873 [2024-07-15 11:40:59.364068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.873 [2024-07-15 11:40:59.364720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-07-15 11:40:59.364740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.873 [2024-07-15 11:40:59.364748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.873 [2024-07-15 11:40:59.364965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.873 [2024-07-15 11:40:59.365187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.873 [2024-07-15 11:40:59.365196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.873 [2024-07-15 11:40:59.365203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.873 [2024-07-15 11:40:59.368692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.873 [2024-07-15 11:40:59.377955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.873 [2024-07-15 11:40:59.378570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-07-15 11:40:59.378586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.873 [2024-07-15 11:40:59.378599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.873 [2024-07-15 11:40:59.378816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.873 [2024-07-15 11:40:59.379032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.873 [2024-07-15 11:40:59.379041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.873 [2024-07-15 11:40:59.379048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.873 [2024-07-15 11:40:59.382542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.873 [2024-07-15 11:40:59.391805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.873 [2024-07-15 11:40:59.392483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-07-15 11:40:59.392522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.873 [2024-07-15 11:40:59.392533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.873 [2024-07-15 11:40:59.392769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.873 [2024-07-15 11:40:59.392989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.873 [2024-07-15 11:40:59.392998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.873 [2024-07-15 11:40:59.393006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.873 [2024-07-15 11:40:59.396507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.873 [2024-07-15 11:40:59.405568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.873 [2024-07-15 11:40:59.406234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-07-15 11:40:59.406272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.873 [2024-07-15 11:40:59.406284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.873 [2024-07-15 11:40:59.406523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.873 [2024-07-15 11:40:59.406743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.873 [2024-07-15 11:40:59.406752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.873 [2024-07-15 11:40:59.406760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.873 [2024-07-15 11:40:59.410261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.873 [2024-07-15 11:40:59.419322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.873 [2024-07-15 11:40:59.419970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-07-15 11:40:59.419988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.873 [2024-07-15 11:40:59.419996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.873 [2024-07-15 11:40:59.420219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.873 [2024-07-15 11:40:59.420438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.873 [2024-07-15 11:40:59.420452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.873 [2024-07-15 11:40:59.420459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.873 [2024-07-15 11:40:59.423949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.873 [2024-07-15 11:40:59.433216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.873 [2024-07-15 11:40:59.433783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-07-15 11:40:59.433820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.873 [2024-07-15 11:40:59.433831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.873 [2024-07-15 11:40:59.434067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.873 [2024-07-15 11:40:59.434294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.873 [2024-07-15 11:40:59.434305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.874 [2024-07-15 11:40:59.434313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.874 [2024-07-15 11:40:59.437807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.874 [2024-07-15 11:40:59.447073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.874 [2024-07-15 11:40:59.447726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-07-15 11:40:59.447746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.874 [2024-07-15 11:40:59.447754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.874 [2024-07-15 11:40:59.447971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.874 [2024-07-15 11:40:59.448193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.874 [2024-07-15 11:40:59.448203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.874 [2024-07-15 11:40:59.448210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.874 [2024-07-15 11:40:59.451712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.874 [2024-07-15 11:40:59.460978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.874 [2024-07-15 11:40:59.461623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-07-15 11:40:59.461640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.874 [2024-07-15 11:40:59.461648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.874 [2024-07-15 11:40:59.461864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.874 [2024-07-15 11:40:59.462080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.874 [2024-07-15 11:40:59.462089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.874 [2024-07-15 11:40:59.462096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.874 [2024-07-15 11:40:59.465685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.874 [2024-07-15 11:40:59.474747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.874 [2024-07-15 11:40:59.475451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-07-15 11:40:59.475489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.874 [2024-07-15 11:40:59.475500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.874 [2024-07-15 11:40:59.475736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.874 [2024-07-15 11:40:59.475956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.874 [2024-07-15 11:40:59.475966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.874 [2024-07-15 11:40:59.475973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.874 [2024-07-15 11:40:59.479473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.874 [2024-07-15 11:40:59.488561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.874 [2024-07-15 11:40:59.489314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-07-15 11:40:59.489352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.874 [2024-07-15 11:40:59.489362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.874 [2024-07-15 11:40:59.489599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.874 [2024-07-15 11:40:59.489820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.874 [2024-07-15 11:40:59.489829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.874 [2024-07-15 11:40:59.489836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.874 [2024-07-15 11:40:59.493335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.874 [2024-07-15 11:40:59.502392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.874 [2024-07-15 11:40:59.503160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-07-15 11:40:59.503198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.874 [2024-07-15 11:40:59.503209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.874 [2024-07-15 11:40:59.503444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.874 [2024-07-15 11:40:59.503664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.874 [2024-07-15 11:40:59.503674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.874 [2024-07-15 11:40:59.503681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.874 [2024-07-15 11:40:59.507186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.874 [2024-07-15 11:40:59.516241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.874 [2024-07-15 11:40:59.516986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-07-15 11:40:59.517023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.874 [2024-07-15 11:40:59.517034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.874 [2024-07-15 11:40:59.517284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.874 [2024-07-15 11:40:59.517506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.874 [2024-07-15 11:40:59.517517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.874 [2024-07-15 11:40:59.517524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.874 [2024-07-15 11:40:59.521021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.874 [2024-07-15 11:40:59.530090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.874 [2024-07-15 11:40:59.530823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-07-15 11:40:59.530861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.874 [2024-07-15 11:40:59.530871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.874 [2024-07-15 11:40:59.531108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.874 [2024-07-15 11:40:59.531337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.874 [2024-07-15 11:40:59.531348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.874 [2024-07-15 11:40:59.531356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.874 [2024-07-15 11:40:59.534848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.874 [2024-07-15 11:40:59.543902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.874 [2024-07-15 11:40:59.544608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-07-15 11:40:59.544646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.874 [2024-07-15 11:40:59.544657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.874 [2024-07-15 11:40:59.544893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.874 [2024-07-15 11:40:59.545114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.874 [2024-07-15 11:40:59.545133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.874 [2024-07-15 11:40:59.545141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.874 [2024-07-15 11:40:59.548634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.874 [2024-07-15 11:40:59.557717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.874 [2024-07-15 11:40:59.558353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-07-15 11:40:59.558373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.874 [2024-07-15 11:40:59.558381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.874 [2024-07-15 11:40:59.558598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.874 [2024-07-15 11:40:59.558814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.874 [2024-07-15 11:40:59.558823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.874 [2024-07-15 11:40:59.558835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.874 [2024-07-15 11:40:59.562335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.874 [2024-07-15 11:40:59.571607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.874 [2024-07-15 11:40:59.572200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-07-15 11:40:59.572218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:30.874 [2024-07-15 11:40:59.572226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:30.874 [2024-07-15 11:40:59.572442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:30.874 [2024-07-15 11:40:59.572659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.874 [2024-07-15 11:40:59.572668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.874 [2024-07-15 11:40:59.572675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.136 [2024-07-15 11:40:59.576171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.136 [2024-07-15 11:40:59.585444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.136 [2024-07-15 11:40:59.586034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.136 [2024-07-15 11:40:59.586050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.136 [2024-07-15 11:40:59.586058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.136 [2024-07-15 11:40:59.586279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.136 [2024-07-15 11:40:59.586496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.136 [2024-07-15 11:40:59.586505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.136 [2024-07-15 11:40:59.586512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.136 [2024-07-15 11:40:59.590005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.136 [2024-07-15 11:40:59.599281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.136 [2024-07-15 11:40:59.600017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.136 [2024-07-15 11:40:59.600055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.136 [2024-07-15 11:40:59.600066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.136 [2024-07-15 11:40:59.600311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.136 [2024-07-15 11:40:59.600532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.136 [2024-07-15 11:40:59.600541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.136 [2024-07-15 11:40:59.600549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.136 [2024-07-15 11:40:59.604043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.136 [2024-07-15 11:40:59.613110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.136 [2024-07-15 11:40:59.613726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.136 [2024-07-15 11:40:59.613750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.136 [2024-07-15 11:40:59.613758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.136 [2024-07-15 11:40:59.613975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.136 [2024-07-15 11:40:59.614199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.136 [2024-07-15 11:40:59.614208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.136 [2024-07-15 11:40:59.614215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.136 [2024-07-15 11:40:59.617709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.136 [2024-07-15 11:40:59.626979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.136 [2024-07-15 11:40:59.627714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.136 [2024-07-15 11:40:59.627753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.136 [2024-07-15 11:40:59.627764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.136 [2024-07-15 11:40:59.628000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.136 [2024-07-15 11:40:59.628228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.136 [2024-07-15 11:40:59.628238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.136 [2024-07-15 11:40:59.628245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.136 [2024-07-15 11:40:59.631744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.136 [2024-07-15 11:40:59.640815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.136 [2024-07-15 11:40:59.641438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.136 [2024-07-15 11:40:59.641458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.136 [2024-07-15 11:40:59.641466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.136 [2024-07-15 11:40:59.641683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.136 [2024-07-15 11:40:59.641899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.136 [2024-07-15 11:40:59.641909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.136 [2024-07-15 11:40:59.641916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.136 [2024-07-15 11:40:59.645414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.136 [2024-07-15 11:40:59.654689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.136 [2024-07-15 11:40:59.655410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.136 [2024-07-15 11:40:59.655447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.136 [2024-07-15 11:40:59.655458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.136 [2024-07-15 11:40:59.655694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.136 [2024-07-15 11:40:59.655918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.136 [2024-07-15 11:40:59.655928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.136 [2024-07-15 11:40:59.655936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.136 [2024-07-15 11:40:59.659439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.136 [2024-07-15 11:40:59.668496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.136 [2024-07-15 11:40:59.669181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.136 [2024-07-15 11:40:59.669220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.136 [2024-07-15 11:40:59.669232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.136 [2024-07-15 11:40:59.669471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.136 [2024-07-15 11:40:59.669691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.136 [2024-07-15 11:40:59.669701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.136 [2024-07-15 11:40:59.669708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.136 [2024-07-15 11:40:59.673213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.136 [2024-07-15 11:40:59.682270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.136 [2024-07-15 11:40:59.682992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.136 [2024-07-15 11:40:59.683030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.136 [2024-07-15 11:40:59.683041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.136 [2024-07-15 11:40:59.683286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.136 [2024-07-15 11:40:59.683507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.136 [2024-07-15 11:40:59.683517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.136 [2024-07-15 11:40:59.683524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.136 [2024-07-15 11:40:59.687016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.137 [2024-07-15 11:40:59.696104] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.137 [2024-07-15 11:40:59.696855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.137 [2024-07-15 11:40:59.696894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.137 [2024-07-15 11:40:59.696904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.137 [2024-07-15 11:40:59.697149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.137 [2024-07-15 11:40:59.697369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.137 [2024-07-15 11:40:59.697380] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.137 [2024-07-15 11:40:59.697387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.137 [2024-07-15 11:40:59.700886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.137 [2024-07-15 11:40:59.709948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.137 [2024-07-15 11:40:59.710612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.137 [2024-07-15 11:40:59.710649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.137 [2024-07-15 11:40:59.710660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.137 [2024-07-15 11:40:59.710896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.137 [2024-07-15 11:40:59.711115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.137 [2024-07-15 11:40:59.711134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.137 [2024-07-15 11:40:59.711142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.137 [2024-07-15 11:40:59.714637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.137 [2024-07-15 11:40:59.723697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.137 [2024-07-15 11:40:59.724440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.137 [2024-07-15 11:40:59.724478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.137 [2024-07-15 11:40:59.724489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.137 [2024-07-15 11:40:59.724725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.137 [2024-07-15 11:40:59.724944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.137 [2024-07-15 11:40:59.724954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.137 [2024-07-15 11:40:59.724961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.137 [2024-07-15 11:40:59.728463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.137 [2024-07-15 11:40:59.737520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.137 [2024-07-15 11:40:59.738222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.137 [2024-07-15 11:40:59.738260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.137 [2024-07-15 11:40:59.738271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.137 [2024-07-15 11:40:59.738507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.137 [2024-07-15 11:40:59.738728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.137 [2024-07-15 11:40:59.738737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.137 [2024-07-15 11:40:59.738744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.137 [2024-07-15 11:40:59.742245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.137 [2024-07-15 11:40:59.751300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.137 [2024-07-15 11:40:59.752005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.137 [2024-07-15 11:40:59.752043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.137 [2024-07-15 11:40:59.752063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.137 [2024-07-15 11:40:59.752318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.137 [2024-07-15 11:40:59.752539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.137 [2024-07-15 11:40:59.752549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.137 [2024-07-15 11:40:59.752556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.137 [2024-07-15 11:40:59.756049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.137 [2024-07-15 11:40:59.765108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.137 [2024-07-15 11:40:59.765859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.137 [2024-07-15 11:40:59.765897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.137 [2024-07-15 11:40:59.765907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.137 [2024-07-15 11:40:59.766151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.137 [2024-07-15 11:40:59.766372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.137 [2024-07-15 11:40:59.766381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.137 [2024-07-15 11:40:59.766389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.137 [2024-07-15 11:40:59.769882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.137 [2024-07-15 11:40:59.778942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.137 [2024-07-15 11:40:59.779686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.137 [2024-07-15 11:40:59.779724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.137 [2024-07-15 11:40:59.779735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.137 [2024-07-15 11:40:59.779971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.137 [2024-07-15 11:40:59.780199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.137 [2024-07-15 11:40:59.780209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.137 [2024-07-15 11:40:59.780217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.137 [2024-07-15 11:40:59.783714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.137 [2024-07-15 11:40:59.792773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.137 [2024-07-15 11:40:59.793497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.137 [2024-07-15 11:40:59.793535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.137 [2024-07-15 11:40:59.793545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.137 [2024-07-15 11:40:59.793781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.137 [2024-07-15 11:40:59.794001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.137 [2024-07-15 11:40:59.794016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.137 [2024-07-15 11:40:59.794023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.137 [2024-07-15 11:40:59.797527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.137 [2024-07-15 11:40:59.806587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.137 [2024-07-15 11:40:59.807379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.137 [2024-07-15 11:40:59.807417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.137 [2024-07-15 11:40:59.807429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.137 [2024-07-15 11:40:59.807664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.137 [2024-07-15 11:40:59.807885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.137 [2024-07-15 11:40:59.807895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.137 [2024-07-15 11:40:59.807902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.137 [2024-07-15 11:40:59.811406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.137 [2024-07-15 11:40:59.820463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.137 [2024-07-15 11:40:59.821202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.137 [2024-07-15 11:40:59.821240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.137 [2024-07-15 11:40:59.821252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.137 [2024-07-15 11:40:59.821489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.137 [2024-07-15 11:40:59.821709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.137 [2024-07-15 11:40:59.821719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.138 [2024-07-15 11:40:59.821726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.138 [2024-07-15 11:40:59.825229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.138 [2024-07-15 11:40:59.834288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.138 [2024-07-15 11:40:59.834986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.138 [2024-07-15 11:40:59.835023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.138 [2024-07-15 11:40:59.835034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.138 [2024-07-15 11:40:59.835278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.138 [2024-07-15 11:40:59.835499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.138 [2024-07-15 11:40:59.835508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.138 [2024-07-15 11:40:59.835516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.399 [2024-07-15 11:40:59.839011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.399 [2024-07-15 11:40:59.848080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.399 [2024-07-15 11:40:59.848831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.399 [2024-07-15 11:40:59.848869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.399 [2024-07-15 11:40:59.848880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.399 [2024-07-15 11:40:59.849116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.399 [2024-07-15 11:40:59.849344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.399 [2024-07-15 11:40:59.849354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.399 [2024-07-15 11:40:59.849362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.399 [2024-07-15 11:40:59.852865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.399 [2024-07-15 11:40:59.861929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.399 [2024-07-15 11:40:59.862619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.399 [2024-07-15 11:40:59.862658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.399 [2024-07-15 11:40:59.862669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.399 [2024-07-15 11:40:59.862905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.399 [2024-07-15 11:40:59.863133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.399 [2024-07-15 11:40:59.863143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.399 [2024-07-15 11:40:59.863151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.399 [2024-07-15 11:40:59.866645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.399 [2024-07-15 11:40:59.875706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.399 [2024-07-15 11:40:59.876440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.399 [2024-07-15 11:40:59.876478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.399 [2024-07-15 11:40:59.876489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.399 [2024-07-15 11:40:59.876726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.399 [2024-07-15 11:40:59.876946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.399 [2024-07-15 11:40:59.876956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.399 [2024-07-15 11:40:59.876964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.399 [2024-07-15 11:40:59.880467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.399 [2024-07-15 11:40:59.889527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.399 [2024-07-15 11:40:59.890223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.399 [2024-07-15 11:40:59.890261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.399 [2024-07-15 11:40:59.890273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.399 [2024-07-15 11:40:59.890516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.399 [2024-07-15 11:40:59.890736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.400 [2024-07-15 11:40:59.890745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.400 [2024-07-15 11:40:59.890753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.400 [2024-07-15 11:40:59.894255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.400 [2024-07-15 11:40:59.903350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.400 [2024-07-15 11:40:59.904003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.400 [2024-07-15 11:40:59.904021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.400 [2024-07-15 11:40:59.904029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.400 [2024-07-15 11:40:59.904254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.400 [2024-07-15 11:40:59.904471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.400 [2024-07-15 11:40:59.904480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.400 [2024-07-15 11:40:59.904487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.400 [2024-07-15 11:40:59.907975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.400 [2024-07-15 11:40:59.917237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.400 [2024-07-15 11:40:59.917969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.400 [2024-07-15 11:40:59.918007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.400 [2024-07-15 11:40:59.918018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.400 [2024-07-15 11:40:59.918263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.400 [2024-07-15 11:40:59.918484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.400 [2024-07-15 11:40:59.918494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.400 [2024-07-15 11:40:59.918501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.400 [2024-07-15 11:40:59.921992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.400 [2024-07-15 11:40:59.931054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.400 [2024-07-15 11:40:59.931784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.400 [2024-07-15 11:40:59.931822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.400 [2024-07-15 11:40:59.931833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.400 [2024-07-15 11:40:59.932069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.400 [2024-07-15 11:40:59.932299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.400 [2024-07-15 11:40:59.932309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.400 [2024-07-15 11:40:59.932321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.400 [2024-07-15 11:40:59.935815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.400 [2024-07-15 11:40:59.944874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.400 [2024-07-15 11:40:59.945574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.400 [2024-07-15 11:40:59.945612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.400 [2024-07-15 11:40:59.945623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.400 [2024-07-15 11:40:59.945858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.400 [2024-07-15 11:40:59.946079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.400 [2024-07-15 11:40:59.946089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.400 [2024-07-15 11:40:59.946097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.400 [2024-07-15 11:40:59.949599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.400 [2024-07-15 11:40:59.958666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.400 [2024-07-15 11:40:59.959418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.400 [2024-07-15 11:40:59.959456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.400 [2024-07-15 11:40:59.959467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.400 [2024-07-15 11:40:59.959703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.400 [2024-07-15 11:40:59.959923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.400 [2024-07-15 11:40:59.959933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.400 [2024-07-15 11:40:59.959941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.400 [2024-07-15 11:40:59.963444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.400 [2024-07-15 11:40:59.972504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.400 [2024-07-15 11:40:59.973202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.400 [2024-07-15 11:40:59.973240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.400 [2024-07-15 11:40:59.973253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.400 [2024-07-15 11:40:59.973490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.400 [2024-07-15 11:40:59.973710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.400 [2024-07-15 11:40:59.973719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.400 [2024-07-15 11:40:59.973727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.400 [2024-07-15 11:40:59.977230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.400 [2024-07-15 11:40:59.986290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.400 [2024-07-15 11:40:59.987037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.400 [2024-07-15 11:40:59.987079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.400 [2024-07-15 11:40:59.987091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.400 [2024-07-15 11:40:59.987337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.400 [2024-07-15 11:40:59.987558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.400 [2024-07-15 11:40:59.987568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.400 [2024-07-15 11:40:59.987575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.400 [2024-07-15 11:40:59.991068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.400 [2024-07-15 11:41:00.000130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.400 [2024-07-15 11:41:00.000876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.400 [2024-07-15 11:41:00.000914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.400 [2024-07-15 11:41:00.000926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.400 [2024-07-15 11:41:00.001171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.400 [2024-07-15 11:41:00.001393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.400 [2024-07-15 11:41:00.001403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.400 [2024-07-15 11:41:00.001410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.400 [2024-07-15 11:41:00.005375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.400 [2024-07-15 11:41:00.014028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.400 [2024-07-15 11:41:00.014726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.400 [2024-07-15 11:41:00.014765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.400 [2024-07-15 11:41:00.014778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.400 [2024-07-15 11:41:00.015016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.400 [2024-07-15 11:41:00.015245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.400 [2024-07-15 11:41:00.015255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.400 [2024-07-15 11:41:00.015264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.400 [2024-07-15 11:41:00.018765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.400 [2024-07-15 11:41:00.027831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.400 [2024-07-15 11:41:00.028542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.400 [2024-07-15 11:41:00.028580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.400 [2024-07-15 11:41:00.028592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.400 [2024-07-15 11:41:00.028828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.400 [2024-07-15 11:41:00.029053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.400 [2024-07-15 11:41:00.029062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.400 [2024-07-15 11:41:00.029070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.400 [2024-07-15 11:41:00.032574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.400 [2024-07-15 11:41:00.041630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.400 [2024-07-15 11:41:00.042394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.400 [2024-07-15 11:41:00.042433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.400 [2024-07-15 11:41:00.042444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.400 [2024-07-15 11:41:00.042681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.401 [2024-07-15 11:41:00.042901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.401 [2024-07-15 11:41:00.042911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.401 [2024-07-15 11:41:00.042919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.401 [2024-07-15 11:41:00.046420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.401 [2024-07-15 11:41:00.055381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.401 [2024-07-15 11:41:00.056088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.401 [2024-07-15 11:41:00.056133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.401 [2024-07-15 11:41:00.056145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.401 [2024-07-15 11:41:00.056381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.401 [2024-07-15 11:41:00.056602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.401 [2024-07-15 11:41:00.056612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.401 [2024-07-15 11:41:00.056620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.401 [2024-07-15 11:41:00.060115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.401 [2024-07-15 11:41:00.069179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.401 [2024-07-15 11:41:00.069926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.401 [2024-07-15 11:41:00.069963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.401 [2024-07-15 11:41:00.069974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.401 [2024-07-15 11:41:00.070219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.401 [2024-07-15 11:41:00.070440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.401 [2024-07-15 11:41:00.070450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.401 [2024-07-15 11:41:00.070457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.401 [2024-07-15 11:41:00.073958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.401 [2024-07-15 11:41:00.083049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.401 [2024-07-15 11:41:00.083638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.401 [2024-07-15 11:41:00.083676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.401 [2024-07-15 11:41:00.083687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.401 [2024-07-15 11:41:00.083922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.401 [2024-07-15 11:41:00.084152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.401 [2024-07-15 11:41:00.084162] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.401 [2024-07-15 11:41:00.084169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.401 [2024-07-15 11:41:00.087663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.401 [2024-07-15 11:41:00.096925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.401 [2024-07-15 11:41:00.097544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.401 [2024-07-15 11:41:00.097582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.401 [2024-07-15 11:41:00.097594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.401 [2024-07-15 11:41:00.097830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.401 [2024-07-15 11:41:00.098051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.401 [2024-07-15 11:41:00.098061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.401 [2024-07-15 11:41:00.098068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.663 [2024-07-15 11:41:00.101571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.663 [2024-07-15 11:41:00.110722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.663 [2024-07-15 11:41:00.111421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.663 [2024-07-15 11:41:00.111459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.663 [2024-07-15 11:41:00.111470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.663 [2024-07-15 11:41:00.111705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.663 [2024-07-15 11:41:00.111926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.663 [2024-07-15 11:41:00.111935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.663 [2024-07-15 11:41:00.111943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.663 [2024-07-15 11:41:00.115444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.663 [2024-07-15 11:41:00.124502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.663 [2024-07-15 11:41:00.125223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.663 [2024-07-15 11:41:00.125261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.663 [2024-07-15 11:41:00.125278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.663 [2024-07-15 11:41:00.125516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.663 [2024-07-15 11:41:00.125736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.663 [2024-07-15 11:41:00.125746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.663 [2024-07-15 11:41:00.125753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.663 [2024-07-15 11:41:00.129259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.663 [2024-07-15 11:41:00.138318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.663 [2024-07-15 11:41:00.139028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.663 [2024-07-15 11:41:00.139066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.663 [2024-07-15 11:41:00.139076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.663 [2024-07-15 11:41:00.139321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.663 [2024-07-15 11:41:00.139542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.663 [2024-07-15 11:41:00.139552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.663 [2024-07-15 11:41:00.139560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.663 [2024-07-15 11:41:00.143052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.663 [2024-07-15 11:41:00.152108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.663 [2024-07-15 11:41:00.152820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.663 [2024-07-15 11:41:00.152859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.663 [2024-07-15 11:41:00.152870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.663 [2024-07-15 11:41:00.153105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.663 [2024-07-15 11:41:00.153336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.663 [2024-07-15 11:41:00.153346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.663 [2024-07-15 11:41:00.153354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.663 [2024-07-15 11:41:00.156846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.663 [2024-07-15 11:41:00.165903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.663 [2024-07-15 11:41:00.166628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.663 [2024-07-15 11:41:00.166667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.663 [2024-07-15 11:41:00.166677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.663 [2024-07-15 11:41:00.166913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.663 [2024-07-15 11:41:00.167144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.663 [2024-07-15 11:41:00.167159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.663 [2024-07-15 11:41:00.167167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.663 [2024-07-15 11:41:00.170662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.663 [2024-07-15 11:41:00.179721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.663 [2024-07-15 11:41:00.180430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.663 [2024-07-15 11:41:00.180468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.663 [2024-07-15 11:41:00.180479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.663 [2024-07-15 11:41:00.180714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.663 [2024-07-15 11:41:00.180935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.663 [2024-07-15 11:41:00.180944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.663 [2024-07-15 11:41:00.180951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.663 [2024-07-15 11:41:00.184455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.663 [2024-07-15 11:41:00.193513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.663 [2024-07-15 11:41:00.194222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.663 [2024-07-15 11:41:00.194260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.663 [2024-07-15 11:41:00.194272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.663 [2024-07-15 11:41:00.194511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.663 [2024-07-15 11:41:00.194731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.663 [2024-07-15 11:41:00.194740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.663 [2024-07-15 11:41:00.194748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.663 [2024-07-15 11:41:00.198250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.663 [2024-07-15 11:41:00.207306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.663 [2024-07-15 11:41:00.208017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.663 [2024-07-15 11:41:00.208055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.663 [2024-07-15 11:41:00.208066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.663 [2024-07-15 11:41:00.208311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.663 [2024-07-15 11:41:00.208532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.663 [2024-07-15 11:41:00.208541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.663 [2024-07-15 11:41:00.208548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.663 [2024-07-15 11:41:00.212043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.663 [2024-07-15 11:41:00.221107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.663 [2024-07-15 11:41:00.221851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.663 [2024-07-15 11:41:00.221889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.663 [2024-07-15 11:41:00.221900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.663 [2024-07-15 11:41:00.222145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.663 [2024-07-15 11:41:00.222366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.663 [2024-07-15 11:41:00.222376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.663 [2024-07-15 11:41:00.222384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.663 [2024-07-15 11:41:00.225877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.663 [2024-07-15 11:41:00.234934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.663 [2024-07-15 11:41:00.235629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.663 [2024-07-15 11:41:00.235667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.663 [2024-07-15 11:41:00.235678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.663 [2024-07-15 11:41:00.235914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.663 [2024-07-15 11:41:00.236143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.663 [2024-07-15 11:41:00.236153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.663 [2024-07-15 11:41:00.236161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.663 [2024-07-15 11:41:00.239655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.663 [2024-07-15 11:41:00.248715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.663 [2024-07-15 11:41:00.249210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.663 [2024-07-15 11:41:00.249234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.663 [2024-07-15 11:41:00.249243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.663 [2024-07-15 11:41:00.249463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.664 [2024-07-15 11:41:00.249680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.664 [2024-07-15 11:41:00.249690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.664 [2024-07-15 11:41:00.249697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.664 [2024-07-15 11:41:00.253207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.664 [2024-07-15 11:41:00.262465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.664 [2024-07-15 11:41:00.263200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.664 [2024-07-15 11:41:00.263245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.664 [2024-07-15 11:41:00.263255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.664 [2024-07-15 11:41:00.263496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.664 [2024-07-15 11:41:00.263716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.664 [2024-07-15 11:41:00.263726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.664 [2024-07-15 11:41:00.263734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.664 [2024-07-15 11:41:00.267238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.664 [2024-07-15 11:41:00.276296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.664 [2024-07-15 11:41:00.276946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.664 [2024-07-15 11:41:00.276965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.664 [2024-07-15 11:41:00.276973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.664 [2024-07-15 11:41:00.277196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.664 [2024-07-15 11:41:00.277413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.664 [2024-07-15 11:41:00.277423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.664 [2024-07-15 11:41:00.277430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.664 [2024-07-15 11:41:00.280918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.664 [2024-07-15 11:41:00.290180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.664 [2024-07-15 11:41:00.290820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.664 [2024-07-15 11:41:00.290836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.664 [2024-07-15 11:41:00.290844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.664 [2024-07-15 11:41:00.291060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.664 [2024-07-15 11:41:00.291281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.664 [2024-07-15 11:41:00.291291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.664 [2024-07-15 11:41:00.291298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.664 [2024-07-15 11:41:00.294783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.664 [2024-07-15 11:41:00.304041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.664 [2024-07-15 11:41:00.304773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.664 [2024-07-15 11:41:00.304811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.664 [2024-07-15 11:41:00.304821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.664 [2024-07-15 11:41:00.305057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.664 [2024-07-15 11:41:00.305288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.664 [2024-07-15 11:41:00.305299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.664 [2024-07-15 11:41:00.305310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.664 [2024-07-15 11:41:00.308809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.664 [2024-07-15 11:41:00.317895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.664 [2024-07-15 11:41:00.318481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.664 [2024-07-15 11:41:00.318519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.664 [2024-07-15 11:41:00.318530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.664 [2024-07-15 11:41:00.318766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.664 [2024-07-15 11:41:00.318986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.664 [2024-07-15 11:41:00.318995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.664 [2024-07-15 11:41:00.319003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.664 [2024-07-15 11:41:00.322504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.664 [2024-07-15 11:41:00.331767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.664 [2024-07-15 11:41:00.332506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.664 [2024-07-15 11:41:00.332544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.664 [2024-07-15 11:41:00.332555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.664 [2024-07-15 11:41:00.332791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.664 [2024-07-15 11:41:00.333011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.664 [2024-07-15 11:41:00.333023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.664 [2024-07-15 11:41:00.333031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.664 [2024-07-15 11:41:00.336535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.664 [2024-07-15 11:41:00.345593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.664 [2024-07-15 11:41:00.346224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.664 [2024-07-15 11:41:00.346262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.664 [2024-07-15 11:41:00.346274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.664 [2024-07-15 11:41:00.346514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.664 [2024-07-15 11:41:00.346734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.664 [2024-07-15 11:41:00.346743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.664 [2024-07-15 11:41:00.346751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.664 [2024-07-15 11:41:00.350257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.664 [2024-07-15 11:41:00.359325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.664 [2024-07-15 11:41:00.360039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.664 [2024-07-15 11:41:00.360077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.664 [2024-07-15 11:41:00.360088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.664 [2024-07-15 11:41:00.360333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.664 [2024-07-15 11:41:00.360554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.664 [2024-07-15 11:41:00.360563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.664 [2024-07-15 11:41:00.360571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.927 [2024-07-15 11:41:00.364065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.927 [2024-07-15 11:41:00.373130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.927 [2024-07-15 11:41:00.373860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.927 [2024-07-15 11:41:00.373897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.927 [2024-07-15 11:41:00.373908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.927 [2024-07-15 11:41:00.374153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.927 [2024-07-15 11:41:00.374374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.927 [2024-07-15 11:41:00.374385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.927 [2024-07-15 11:41:00.374392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.927 [2024-07-15 11:41:00.377884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.927 [2024-07-15 11:41:00.386937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.927 [2024-07-15 11:41:00.387553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.927 [2024-07-15 11:41:00.387572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.927 [2024-07-15 11:41:00.387580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.927 [2024-07-15 11:41:00.387796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.927 [2024-07-15 11:41:00.388012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.927 [2024-07-15 11:41:00.388021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.927 [2024-07-15 11:41:00.388028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.927 [2024-07-15 11:41:00.391521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.927 [2024-07-15 11:41:00.400778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.927 [2024-07-15 11:41:00.401562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.927 [2024-07-15 11:41:00.401600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.927 [2024-07-15 11:41:00.401610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.927 [2024-07-15 11:41:00.401846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.927 [2024-07-15 11:41:00.402071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.928 [2024-07-15 11:41:00.402081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.928 [2024-07-15 11:41:00.402088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.928 [2024-07-15 11:41:00.405590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.928 [2024-07-15 11:41:00.414645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.928 [2024-07-15 11:41:00.415390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.928 [2024-07-15 11:41:00.415428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.928 [2024-07-15 11:41:00.415439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.928 [2024-07-15 11:41:00.415675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.928 [2024-07-15 11:41:00.415896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.928 [2024-07-15 11:41:00.415905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.928 [2024-07-15 11:41:00.415913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.928 [2024-07-15 11:41:00.419416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.928 [2024-07-15 11:41:00.428516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.928 [2024-07-15 11:41:00.429221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.928 [2024-07-15 11:41:00.429261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.928 [2024-07-15 11:41:00.429273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.928 [2024-07-15 11:41:00.429511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.928 [2024-07-15 11:41:00.429732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.928 [2024-07-15 11:41:00.429742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.928 [2024-07-15 11:41:00.429749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.928 [2024-07-15 11:41:00.433254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.928 [2024-07-15 11:41:00.442318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.928 [2024-07-15 11:41:00.443001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.928 [2024-07-15 11:41:00.443039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.928 [2024-07-15 11:41:00.443049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.928 [2024-07-15 11:41:00.443294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.928 [2024-07-15 11:41:00.443515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.928 [2024-07-15 11:41:00.443526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.928 [2024-07-15 11:41:00.443535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.928 [2024-07-15 11:41:00.447038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.928 [2024-07-15 11:41:00.456112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.928 [2024-07-15 11:41:00.456779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.928 [2024-07-15 11:41:00.456798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.928 [2024-07-15 11:41:00.456806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.928 [2024-07-15 11:41:00.457023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.928 [2024-07-15 11:41:00.457246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.928 [2024-07-15 11:41:00.457255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.928 [2024-07-15 11:41:00.457263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.928 [2024-07-15 11:41:00.460751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.928 [2024-07-15 11:41:00.470013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.928 [2024-07-15 11:41:00.470736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.928 [2024-07-15 11:41:00.470775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.928 [2024-07-15 11:41:00.470785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.928 [2024-07-15 11:41:00.471022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.928 [2024-07-15 11:41:00.471249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.928 [2024-07-15 11:41:00.471267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.928 [2024-07-15 11:41:00.471275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.928 [2024-07-15 11:41:00.474768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.928 [2024-07-15 11:41:00.483831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.928 [2024-07-15 11:41:00.484463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.928 [2024-07-15 11:41:00.484484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.928 [2024-07-15 11:41:00.484492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.928 [2024-07-15 11:41:00.484709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.928 [2024-07-15 11:41:00.484925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.928 [2024-07-15 11:41:00.484934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.928 [2024-07-15 11:41:00.484941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.928 [2024-07-15 11:41:00.488434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.928 [2024-07-15 11:41:00.497694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.928 [2024-07-15 11:41:00.498386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.928 [2024-07-15 11:41:00.498424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.928 [2024-07-15 11:41:00.498440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.928 [2024-07-15 11:41:00.498676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.928 [2024-07-15 11:41:00.498896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.928 [2024-07-15 11:41:00.498906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.928 [2024-07-15 11:41:00.498914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.928 [2024-07-15 11:41:00.502415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.928 [2024-07-15 11:41:00.511474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.928 [2024-07-15 11:41:00.512176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.928 [2024-07-15 11:41:00.512215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.928 [2024-07-15 11:41:00.512227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.928 [2024-07-15 11:41:00.512464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.928 [2024-07-15 11:41:00.512685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.928 [2024-07-15 11:41:00.512695] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.928 [2024-07-15 11:41:00.512702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.928 [2024-07-15 11:41:00.516207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.928 [2024-07-15 11:41:00.525296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.928 [2024-07-15 11:41:00.526022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.928 [2024-07-15 11:41:00.526060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.928 [2024-07-15 11:41:00.526072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.928 [2024-07-15 11:41:00.526316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.928 [2024-07-15 11:41:00.526537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.928 [2024-07-15 11:41:00.526547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.928 [2024-07-15 11:41:00.526555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.928 [2024-07-15 11:41:00.530047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.928 [2024-07-15 11:41:00.539106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.928 [2024-07-15 11:41:00.539760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.928 [2024-07-15 11:41:00.539780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.928 [2024-07-15 11:41:00.539788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.928 [2024-07-15 11:41:00.540008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.928 [2024-07-15 11:41:00.540231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.928 [2024-07-15 11:41:00.540245] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.928 [2024-07-15 11:41:00.540252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.928 [2024-07-15 11:41:00.543744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.928 [2024-07-15 11:41:00.553009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.928 [2024-07-15 11:41:00.553754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.928 [2024-07-15 11:41:00.553792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.929 [2024-07-15 11:41:00.553803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.929 [2024-07-15 11:41:00.554039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.929 [2024-07-15 11:41:00.554267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.929 [2024-07-15 11:41:00.554278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.929 [2024-07-15 11:41:00.554285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.929 [2024-07-15 11:41:00.557780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.929 [2024-07-15 11:41:00.566842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.929 [2024-07-15 11:41:00.567568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.929 [2024-07-15 11:41:00.567607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.929 [2024-07-15 11:41:00.567618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.929 [2024-07-15 11:41:00.567854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.929 [2024-07-15 11:41:00.568074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.929 [2024-07-15 11:41:00.568083] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.929 [2024-07-15 11:41:00.568091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.929 [2024-07-15 11:41:00.571593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.929 [2024-07-15 11:41:00.580675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.929 [2024-07-15 11:41:00.581414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.929 [2024-07-15 11:41:00.581453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.929 [2024-07-15 11:41:00.581464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.929 [2024-07-15 11:41:00.581700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.929 [2024-07-15 11:41:00.581920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.929 [2024-07-15 11:41:00.581930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.929 [2024-07-15 11:41:00.581937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.929 [2024-07-15 11:41:00.585449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.929 [2024-07-15 11:41:00.594517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.929 [2024-07-15 11:41:00.595128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.929 [2024-07-15 11:41:00.595148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.929 [2024-07-15 11:41:00.595156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.929 [2024-07-15 11:41:00.595373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.929 [2024-07-15 11:41:00.595590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.929 [2024-07-15 11:41:00.595598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.929 [2024-07-15 11:41:00.595605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.929 [2024-07-15 11:41:00.599094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.929 [2024-07-15 11:41:00.608361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.929 [2024-07-15 11:41:00.609091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.929 [2024-07-15 11:41:00.609137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.929 [2024-07-15 11:41:00.609150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.929 [2024-07-15 11:41:00.609387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.929 [2024-07-15 11:41:00.609607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.929 [2024-07-15 11:41:00.609617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.929 [2024-07-15 11:41:00.609625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.929 [2024-07-15 11:41:00.613119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.929 [2024-07-15 11:41:00.622188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.929 [2024-07-15 11:41:00.622834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.929 [2024-07-15 11:41:00.622853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:31.929 [2024-07-15 11:41:00.622861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:31.929 [2024-07-15 11:41:00.623078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:31.929 [2024-07-15 11:41:00.623300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.929 [2024-07-15 11:41:00.623310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.929 [2024-07-15 11:41:00.623318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.929 [2024-07-15 11:41:00.626810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.190 [2024-07-15 11:41:00.636074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.190 [2024-07-15 11:41:00.636700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.190 [2024-07-15 11:41:00.636716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.190 [2024-07-15 11:41:00.636724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.190 [2024-07-15 11:41:00.636945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.190 [2024-07-15 11:41:00.637166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.190 [2024-07-15 11:41:00.637176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.190 [2024-07-15 11:41:00.637184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.190 [2024-07-15 11:41:00.640673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.190 [2024-07-15 11:41:00.649935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.190 [2024-07-15 11:41:00.650732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.190 [2024-07-15 11:41:00.650770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.190 [2024-07-15 11:41:00.650781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.190 [2024-07-15 11:41:00.651017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.190 [2024-07-15 11:41:00.651246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.190 [2024-07-15 11:41:00.651257] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.190 [2024-07-15 11:41:00.651264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.190 [2024-07-15 11:41:00.654772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.191 [2024-07-15 11:41:00.663837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.191 [2024-07-15 11:41:00.664483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.191 [2024-07-15 11:41:00.664521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.191 [2024-07-15 11:41:00.664532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.191 [2024-07-15 11:41:00.664768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.191 [2024-07-15 11:41:00.664988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.191 [2024-07-15 11:41:00.664998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.191 [2024-07-15 11:41:00.665005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.191 [2024-07-15 11:41:00.668507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.191 [2024-07-15 11:41:00.677776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.191 [2024-07-15 11:41:00.679096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.191 [2024-07-15 11:41:00.679121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.191 [2024-07-15 11:41:00.679138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.191 [2024-07-15 11:41:00.679362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.191 [2024-07-15 11:41:00.679581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.191 [2024-07-15 11:41:00.679591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.191 [2024-07-15 11:41:00.679601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.191 [2024-07-15 11:41:00.683097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.191 [2024-07-15 11:41:00.691541] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.191 [2024-07-15 11:41:00.692176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.191 [2024-07-15 11:41:00.692194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.191 [2024-07-15 11:41:00.692202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.191 [2024-07-15 11:41:00.692418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.191 [2024-07-15 11:41:00.692634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.191 [2024-07-15 11:41:00.692643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.191 [2024-07-15 11:41:00.692650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.191 [2024-07-15 11:41:00.696204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.191 [2024-07-15 11:41:00.705476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.191 [2024-07-15 11:41:00.706103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.191 [2024-07-15 11:41:00.706118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.191 [2024-07-15 11:41:00.706131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.191 [2024-07-15 11:41:00.706348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.191 [2024-07-15 11:41:00.706564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.191 [2024-07-15 11:41:00.706574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.191 [2024-07-15 11:41:00.706581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.191 [2024-07-15 11:41:00.710067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.191 [2024-07-15 11:41:00.719335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.191 [2024-07-15 11:41:00.720070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.191 [2024-07-15 11:41:00.720109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.191 [2024-07-15 11:41:00.720119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.191 [2024-07-15 11:41:00.720361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.191 [2024-07-15 11:41:00.720582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.191 [2024-07-15 11:41:00.720592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.191 [2024-07-15 11:41:00.720599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.191 [2024-07-15 11:41:00.724095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.191 [2024-07-15 11:41:00.733193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.191 [2024-07-15 11:41:00.733861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.191 [2024-07-15 11:41:00.733880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.191 [2024-07-15 11:41:00.733888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.191 [2024-07-15 11:41:00.734105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.191 [2024-07-15 11:41:00.734328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.191 [2024-07-15 11:41:00.734338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.191 [2024-07-15 11:41:00.734345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.191 [2024-07-15 11:41:00.737838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.191 [2024-07-15 11:41:00.747102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.191 [2024-07-15 11:41:00.747837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.191 [2024-07-15 11:41:00.747875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.191 [2024-07-15 11:41:00.747886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.191 [2024-07-15 11:41:00.748131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.191 [2024-07-15 11:41:00.748352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.191 [2024-07-15 11:41:00.748363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.191 [2024-07-15 11:41:00.748371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.191 [2024-07-15 11:41:00.751868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.191 [2024-07-15 11:41:00.760944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.191 [2024-07-15 11:41:00.762221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.191 [2024-07-15 11:41:00.762254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.191 [2024-07-15 11:41:00.762265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.191 [2024-07-15 11:41:00.762501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.191 [2024-07-15 11:41:00.762722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.191 [2024-07-15 11:41:00.762733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.191 [2024-07-15 11:41:00.762740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.191 [2024-07-15 11:41:00.766240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.191 [2024-07-15 11:41:00.774685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.191 [2024-07-15 11:41:00.775301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.191 [2024-07-15 11:41:00.775321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.191 [2024-07-15 11:41:00.775329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.191 [2024-07-15 11:41:00.775546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.191 [2024-07-15 11:41:00.775768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.191 [2024-07-15 11:41:00.775777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.191 [2024-07-15 11:41:00.775784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.191 [2024-07-15 11:41:00.779279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.191 [2024-07-15 11:41:00.788559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.191 [2024-07-15 11:41:00.789171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.191 [2024-07-15 11:41:00.789196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.191 [2024-07-15 11:41:00.789205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.191 [2024-07-15 11:41:00.789425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.191 [2024-07-15 11:41:00.789643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.191 [2024-07-15 11:41:00.789652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.191 [2024-07-15 11:41:00.789659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.191 [2024-07-15 11:41:00.793157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.191 [2024-07-15 11:41:00.802421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.191 [2024-07-15 11:41:00.803152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.191 [2024-07-15 11:41:00.803190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.191 [2024-07-15 11:41:00.803202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.191 [2024-07-15 11:41:00.803441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.191 [2024-07-15 11:41:00.803662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.192 [2024-07-15 11:41:00.803671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.192 [2024-07-15 11:41:00.803679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.192 [2024-07-15 11:41:00.807176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.192 [2024-07-15 11:41:00.816237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.192 [2024-07-15 11:41:00.816881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.192 [2024-07-15 11:41:00.816900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.192 [2024-07-15 11:41:00.816908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.192 [2024-07-15 11:41:00.817131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.192 [2024-07-15 11:41:00.817349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.192 [2024-07-15 11:41:00.817358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.192 [2024-07-15 11:41:00.817365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.192 [2024-07-15 11:41:00.820858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.192 [2024-07-15 11:41:00.830119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.192 [2024-07-15 11:41:00.830762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.192 [2024-07-15 11:41:00.830778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.192 [2024-07-15 11:41:00.830786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.192 [2024-07-15 11:41:00.831001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.192 [2024-07-15 11:41:00.831224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.192 [2024-07-15 11:41:00.831234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.192 [2024-07-15 11:41:00.831241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.192 [2024-07-15 11:41:00.834726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.192 [2024-07-15 11:41:00.843983] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.192 [2024-07-15 11:41:00.844545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.192 [2024-07-15 11:41:00.844583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.192 [2024-07-15 11:41:00.844594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.192 [2024-07-15 11:41:00.844830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.192 [2024-07-15 11:41:00.845051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.192 [2024-07-15 11:41:00.845061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.192 [2024-07-15 11:41:00.845069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.192 [2024-07-15 11:41:00.848572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.192 [2024-07-15 11:41:00.857855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.192 [2024-07-15 11:41:00.858609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.192 [2024-07-15 11:41:00.858647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.192 [2024-07-15 11:41:00.858658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.192 [2024-07-15 11:41:00.858894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.192 [2024-07-15 11:41:00.859114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.192 [2024-07-15 11:41:00.859131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.192 [2024-07-15 11:41:00.859138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.192 [2024-07-15 11:41:00.862637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.192 [2024-07-15 11:41:00.871712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.192 [2024-07-15 11:41:00.872462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.192 [2024-07-15 11:41:00.872500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.192 [2024-07-15 11:41:00.872516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.192 [2024-07-15 11:41:00.872751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.192 [2024-07-15 11:41:00.872971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.192 [2024-07-15 11:41:00.872981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.192 [2024-07-15 11:41:00.872988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.192 [2024-07-15 11:41:00.876490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.192 [2024-07-15 11:41:00.885555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.192 [2024-07-15 11:41:00.886249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.192 [2024-07-15 11:41:00.886287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.192 [2024-07-15 11:41:00.886300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.192 [2024-07-15 11:41:00.886537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.192 [2024-07-15 11:41:00.886757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.192 [2024-07-15 11:41:00.886767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.192 [2024-07-15 11:41:00.886775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.192 [2024-07-15 11:41:00.890277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.453 [2024-07-15 11:41:00.899337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.453 [2024-07-15 11:41:00.899992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.453 [2024-07-15 11:41:00.900011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.453 [2024-07-15 11:41:00.900019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.453 [2024-07-15 11:41:00.900241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.453 [2024-07-15 11:41:00.900459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.453 [2024-07-15 11:41:00.900468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.453 [2024-07-15 11:41:00.900476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.453 [2024-07-15 11:41:00.903966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.453 [2024-07-15 11:41:00.913229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.453 [2024-07-15 11:41:00.913931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.453 [2024-07-15 11:41:00.913969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.453 [2024-07-15 11:41:00.913979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.453 [2024-07-15 11:41:00.914224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.453 [2024-07-15 11:41:00.914445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.453 [2024-07-15 11:41:00.914459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.453 [2024-07-15 11:41:00.914467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.453 [2024-07-15 11:41:00.917960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.453 [2024-07-15 11:41:00.927023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.453 [2024-07-15 11:41:00.927741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.453 [2024-07-15 11:41:00.927779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.453 [2024-07-15 11:41:00.927792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.453 [2024-07-15 11:41:00.928029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.453 [2024-07-15 11:41:00.928256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.453 [2024-07-15 11:41:00.928266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.453 [2024-07-15 11:41:00.928274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.453 [2024-07-15 11:41:00.931769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.453 [2024-07-15 11:41:00.940863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.453 [2024-07-15 11:41:00.941589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.454 [2024-07-15 11:41:00.941627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.454 [2024-07-15 11:41:00.941638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.454 [2024-07-15 11:41:00.941874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.454 [2024-07-15 11:41:00.942094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.454 [2024-07-15 11:41:00.942104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.454 [2024-07-15 11:41:00.942112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.454 [2024-07-15 11:41:00.945612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.454 [2024-07-15 11:41:00.954686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.454 [2024-07-15 11:41:00.955434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.454 [2024-07-15 11:41:00.955472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.454 [2024-07-15 11:41:00.955482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.454 [2024-07-15 11:41:00.955718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.454 [2024-07-15 11:41:00.955938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.454 [2024-07-15 11:41:00.955948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.454 [2024-07-15 11:41:00.955956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.454 [2024-07-15 11:41:00.959462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.454 [2024-07-15 11:41:00.968530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.454 [2024-07-15 11:41:00.969176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.454 [2024-07-15 11:41:00.969214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.454 [2024-07-15 11:41:00.969227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.454 [2024-07-15 11:41:00.969466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.454 [2024-07-15 11:41:00.969686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.454 [2024-07-15 11:41:00.969696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.454 [2024-07-15 11:41:00.969703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.454 [2024-07-15 11:41:00.973204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.454 [2024-07-15 11:41:00.982472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.454 [2024-07-15 11:41:00.983226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.454 [2024-07-15 11:41:00.983265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.454 [2024-07-15 11:41:00.983277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.454 [2024-07-15 11:41:00.983514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.454 [2024-07-15 11:41:00.983734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.454 [2024-07-15 11:41:00.983744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.454 [2024-07-15 11:41:00.983751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.454 [2024-07-15 11:41:00.987253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.454 [2024-07-15 11:41:00.996318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.454 [2024-07-15 11:41:00.996972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.454 [2024-07-15 11:41:00.996991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.454 [2024-07-15 11:41:00.996999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.454 [2024-07-15 11:41:00.997222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.454 [2024-07-15 11:41:00.997439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.454 [2024-07-15 11:41:00.997448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.454 [2024-07-15 11:41:00.997455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.454 [2024-07-15 11:41:01.000945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.454 [2024-07-15 11:41:01.010210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.454 [2024-07-15 11:41:01.010933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.454 [2024-07-15 11:41:01.010970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.454 [2024-07-15 11:41:01.010981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.454 [2024-07-15 11:41:01.011229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.454 [2024-07-15 11:41:01.011450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.454 [2024-07-15 11:41:01.011460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.454 [2024-07-15 11:41:01.011468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.454 [2024-07-15 11:41:01.014962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.454 [2024-07-15 11:41:01.024029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.454 [2024-07-15 11:41:01.024655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.454 [2024-07-15 11:41:01.024674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.454 [2024-07-15 11:41:01.024682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.454 [2024-07-15 11:41:01.024899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.454 [2024-07-15 11:41:01.025115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.454 [2024-07-15 11:41:01.025130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.454 [2024-07-15 11:41:01.025137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.454 [2024-07-15 11:41:01.028628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.454 [2024-07-15 11:41:01.037887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.454 [2024-07-15 11:41:01.038511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.454 [2024-07-15 11:41:01.038527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.454 [2024-07-15 11:41:01.038535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.454 [2024-07-15 11:41:01.038751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.454 [2024-07-15 11:41:01.038967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.454 [2024-07-15 11:41:01.038975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.454 [2024-07-15 11:41:01.038982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.454 [2024-07-15 11:41:01.042473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.454 [2024-07-15 11:41:01.051733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.454 [2024-07-15 11:41:01.052384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.454 [2024-07-15 11:41:01.052422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.454 [2024-07-15 11:41:01.052434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.454 [2024-07-15 11:41:01.052672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.454 [2024-07-15 11:41:01.052891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.454 [2024-07-15 11:41:01.052900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.454 [2024-07-15 11:41:01.052911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.454 [2024-07-15 11:41:01.056430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.454 [2024-07-15 11:41:01.065494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.454 [2024-07-15 11:41:01.066269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.454 [2024-07-15 11:41:01.066306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.454 [2024-07-15 11:41:01.066318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.454 [2024-07-15 11:41:01.066558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.454 [2024-07-15 11:41:01.066777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.454 [2024-07-15 11:41:01.066788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.454 [2024-07-15 11:41:01.066795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.454 [2024-07-15 11:41:01.070298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.454 [2024-07-15 11:41:01.079357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.454 [2024-07-15 11:41:01.079886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.454 [2024-07-15 11:41:01.079903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.454 [2024-07-15 11:41:01.079911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.454 [2024-07-15 11:41:01.080134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.454 [2024-07-15 11:41:01.080351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.454 [2024-07-15 11:41:01.080359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.454 [2024-07-15 11:41:01.080366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.454 [2024-07-15 11:41:01.083854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.454 [2024-07-15 11:41:01.093214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.454 [2024-07-15 11:41:01.093831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.454 [2024-07-15 11:41:01.093868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.454 [2024-07-15 11:41:01.093881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.454 [2024-07-15 11:41:01.094118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.454 [2024-07-15 11:41:01.094346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.454 [2024-07-15 11:41:01.094355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.454 [2024-07-15 11:41:01.094363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.454 [2024-07-15 11:41:01.097854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.454 [2024-07-15 11:41:01.107133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.454 [2024-07-15 11:41:01.107799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.454 [2024-07-15 11:41:01.107818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.454 [2024-07-15 11:41:01.107826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.454 [2024-07-15 11:41:01.108043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.454 [2024-07-15 11:41:01.108265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.454 [2024-07-15 11:41:01.108275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.454 [2024-07-15 11:41:01.108282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.454 [2024-07-15 11:41:01.111770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.454 [2024-07-15 11:41:01.121036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.454 [2024-07-15 11:41:01.121732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.454 [2024-07-15 11:41:01.121770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.454 [2024-07-15 11:41:01.121780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.454 [2024-07-15 11:41:01.122016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.454 [2024-07-15 11:41:01.122243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.454 [2024-07-15 11:41:01.122252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.454 [2024-07-15 11:41:01.122260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.454 [2024-07-15 11:41:01.125752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.454 [2024-07-15 11:41:01.134811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.454 [2024-07-15 11:41:01.135539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.454 [2024-07-15 11:41:01.135576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.454 [2024-07-15 11:41:01.135587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.454 [2024-07-15 11:41:01.135822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.454 [2024-07-15 11:41:01.136042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.454 [2024-07-15 11:41:01.136050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.454 [2024-07-15 11:41:01.136058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.454 [2024-07-15 11:41:01.139558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.454 [2024-07-15 11:41:01.148649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.454 [2024-07-15 11:41:01.149362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.454 [2024-07-15 11:41:01.149398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.454 [2024-07-15 11:41:01.149409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.454 [2024-07-15 11:41:01.149645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.454 [2024-07-15 11:41:01.149869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.454 [2024-07-15 11:41:01.149877] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.454 [2024-07-15 11:41:01.149885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.454 [2024-07-15 11:41:01.153394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.716 [2024-07-15 11:41:01.162461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.716 [2024-07-15 11:41:01.163066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 11:41:01.163085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.716 [2024-07-15 11:41:01.163093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.716 [2024-07-15 11:41:01.163315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.716 [2024-07-15 11:41:01.163532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.716 [2024-07-15 11:41:01.163540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.716 [2024-07-15 11:41:01.163547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.716 [2024-07-15 11:41:01.167033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.716 [2024-07-15 11:41:01.176306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.716 [2024-07-15 11:41:01.176904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 11:41:01.176919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.716 [2024-07-15 11:41:01.176927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.716 [2024-07-15 11:41:01.177150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.716 [2024-07-15 11:41:01.177367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.716 [2024-07-15 11:41:01.177375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.716 [2024-07-15 11:41:01.177382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.716 [2024-07-15 11:41:01.180877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.716 [2024-07-15 11:41:01.190157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.716 [2024-07-15 11:41:01.190908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.716 [2024-07-15 11:41:01.190945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.717 [2024-07-15 11:41:01.190956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.717 [2024-07-15 11:41:01.191201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.717 [2024-07-15 11:41:01.191421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.717 [2024-07-15 11:41:01.191430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.717 [2024-07-15 11:41:01.191438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.717 [2024-07-15 11:41:01.194941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.717 [2024-07-15 11:41:01.204019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.717 [2024-07-15 11:41:01.204639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 11:41:01.204657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.717 [2024-07-15 11:41:01.204665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.717 [2024-07-15 11:41:01.204882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.717 [2024-07-15 11:41:01.205097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.717 [2024-07-15 11:41:01.205105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.717 [2024-07-15 11:41:01.205112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.717 [2024-07-15 11:41:01.208613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.717 [2024-07-15 11:41:01.217890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.717 [2024-07-15 11:41:01.218500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 11:41:01.218516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.717 [2024-07-15 11:41:01.218524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.717 [2024-07-15 11:41:01.218739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.717 [2024-07-15 11:41:01.218955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.717 [2024-07-15 11:41:01.218963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.717 [2024-07-15 11:41:01.218969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.717 [2024-07-15 11:41:01.222466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.717 [2024-07-15 11:41:01.231739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.717 [2024-07-15 11:41:01.232432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 11:41:01.232469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.717 [2024-07-15 11:41:01.232479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.717 [2024-07-15 11:41:01.232716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.717 [2024-07-15 11:41:01.232935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.717 [2024-07-15 11:41:01.232943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.717 [2024-07-15 11:41:01.232951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.717 [2024-07-15 11:41:01.236446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.717 [2024-07-15 11:41:01.245501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.717 [2024-07-15 11:41:01.246146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 11:41:01.246168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.717 [2024-07-15 11:41:01.246180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.717 [2024-07-15 11:41:01.246399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.717 [2024-07-15 11:41:01.246615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.717 [2024-07-15 11:41:01.246623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.717 [2024-07-15 11:41:01.246629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.717 [2024-07-15 11:41:01.250126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.717 [2024-07-15 11:41:01.259403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.717 [2024-07-15 11:41:01.260085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 11:41:01.260131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.717 [2024-07-15 11:41:01.260143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.717 [2024-07-15 11:41:01.260378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.717 [2024-07-15 11:41:01.260598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.717 [2024-07-15 11:41:01.260606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.717 [2024-07-15 11:41:01.260614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.717 [2024-07-15 11:41:01.264109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.717 [2024-07-15 11:41:01.273165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.717 [2024-07-15 11:41:01.273840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 11:41:01.273877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.717 [2024-07-15 11:41:01.273887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.717 [2024-07-15 11:41:01.274132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.717 [2024-07-15 11:41:01.274353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.717 [2024-07-15 11:41:01.274361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.717 [2024-07-15 11:41:01.274368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.717 [2024-07-15 11:41:01.277863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.717 [2024-07-15 11:41:01.286915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.717 [2024-07-15 11:41:01.287661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 11:41:01.287698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.717 [2024-07-15 11:41:01.287709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.717 [2024-07-15 11:41:01.287945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.717 [2024-07-15 11:41:01.288174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.717 [2024-07-15 11:41:01.288187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.717 [2024-07-15 11:41:01.288195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.717 [2024-07-15 11:41:01.291687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.717 [2024-07-15 11:41:01.300737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.717 [2024-07-15 11:41:01.301448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 11:41:01.301486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.717 [2024-07-15 11:41:01.301496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.717 [2024-07-15 11:41:01.301731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.717 [2024-07-15 11:41:01.301951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.717 [2024-07-15 11:41:01.301960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.717 [2024-07-15 11:41:01.301967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.717 [2024-07-15 11:41:01.305465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.717 [2024-07-15 11:41:01.314522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.717 [2024-07-15 11:41:01.315266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 11:41:01.315302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.717 [2024-07-15 11:41:01.315313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.717 [2024-07-15 11:41:01.315548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.717 [2024-07-15 11:41:01.315768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.717 [2024-07-15 11:41:01.315777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.717 [2024-07-15 11:41:01.315785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.717 [2024-07-15 11:41:01.319282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.717 [2024-07-15 11:41:01.328337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.717 [2024-07-15 11:41:01.328984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.717 [2024-07-15 11:41:01.329002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.717 [2024-07-15 11:41:01.329009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.717 [2024-07-15 11:41:01.329233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.717 [2024-07-15 11:41:01.329449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.717 [2024-07-15 11:41:01.329457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.717 [2024-07-15 11:41:01.329464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.717 [2024-07-15 11:41:01.332950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.717 [2024-07-15 11:41:01.342210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.717 [2024-07-15 11:41:01.342938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.718 [2024-07-15 11:41:01.342975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.718 [2024-07-15 11:41:01.342985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.718 [2024-07-15 11:41:01.343231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.718 [2024-07-15 11:41:01.343451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.718 [2024-07-15 11:41:01.343460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.718 [2024-07-15 11:41:01.343467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.718 [2024-07-15 11:41:01.346960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.718 [2024-07-15 11:41:01.356056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.718 [2024-07-15 11:41:01.356616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.718 [2024-07-15 11:41:01.356652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.718 [2024-07-15 11:41:01.356662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.718 [2024-07-15 11:41:01.356898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.718 [2024-07-15 11:41:01.357118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.718 [2024-07-15 11:41:01.357140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.718 [2024-07-15 11:41:01.357148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.718 [2024-07-15 11:41:01.360641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.718 [2024-07-15 11:41:01.369898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.718 [2024-07-15 11:41:01.370538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.718 [2024-07-15 11:41:01.370575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.718 [2024-07-15 11:41:01.370585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.718 [2024-07-15 11:41:01.370821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.718 [2024-07-15 11:41:01.371040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.718 [2024-07-15 11:41:01.371049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.718 [2024-07-15 11:41:01.371056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.718 [2024-07-15 11:41:01.374557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.718 [2024-07-15 11:41:01.383816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.718 [2024-07-15 11:41:01.384528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.718 [2024-07-15 11:41:01.384564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.718 [2024-07-15 11:41:01.384575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.718 [2024-07-15 11:41:01.384815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.718 [2024-07-15 11:41:01.385035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.718 [2024-07-15 11:41:01.385043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.718 [2024-07-15 11:41:01.385051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.718 [2024-07-15 11:41:01.388551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.718 [2024-07-15 11:41:01.397608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.718 [2024-07-15 11:41:01.398358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.718 [2024-07-15 11:41:01.398396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.718 [2024-07-15 11:41:01.398406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.718 [2024-07-15 11:41:01.398642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.718 [2024-07-15 11:41:01.398861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.718 [2024-07-15 11:41:01.398870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.718 [2024-07-15 11:41:01.398877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.718 [2024-07-15 11:41:01.402379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.718 [2024-07-15 11:41:01.411433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.718 [2024-07-15 11:41:01.412084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.718 [2024-07-15 11:41:01.412103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.718 [2024-07-15 11:41:01.412110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.718 [2024-07-15 11:41:01.412334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.718 [2024-07-15 11:41:01.412551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.718 [2024-07-15 11:41:01.412559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.718 [2024-07-15 11:41:01.412566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.718 [2024-07-15 11:41:01.416056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.980 [2024-07-15 11:41:01.425316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.980 [2024-07-15 11:41:01.425915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.980 [2024-07-15 11:41:01.425931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.980 [2024-07-15 11:41:01.425939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.980 [2024-07-15 11:41:01.426161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.980 [2024-07-15 11:41:01.426377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.980 [2024-07-15 11:41:01.426386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.980 [2024-07-15 11:41:01.426401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.980 [2024-07-15 11:41:01.429890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.980 [2024-07-15 11:41:01.439148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.980 [2024-07-15 11:41:01.439791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.980 [2024-07-15 11:41:01.439827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.980 [2024-07-15 11:41:01.439837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.980 [2024-07-15 11:41:01.440073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.980 [2024-07-15 11:41:01.440302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.980 [2024-07-15 11:41:01.440312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.980 [2024-07-15 11:41:01.440320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.980 [2024-07-15 11:41:01.443812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.980 [2024-07-15 11:41:01.452962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.980 [2024-07-15 11:41:01.453688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.980 [2024-07-15 11:41:01.453725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.980 [2024-07-15 11:41:01.453735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.980 [2024-07-15 11:41:01.453971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.980 [2024-07-15 11:41:01.454201] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.980 [2024-07-15 11:41:01.454210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.980 [2024-07-15 11:41:01.454217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.980 [2024-07-15 11:41:01.457712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.980 [2024-07-15 11:41:01.466773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.980 [2024-07-15 11:41:01.467498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.980 [2024-07-15 11:41:01.467535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.980 [2024-07-15 11:41:01.467546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.980 [2024-07-15 11:41:01.467782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.980 [2024-07-15 11:41:01.468002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.980 [2024-07-15 11:41:01.468010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.980 [2024-07-15 11:41:01.468017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.980 [2024-07-15 11:41:01.471518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.980 [2024-07-15 11:41:01.480578] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.980 [2024-07-15 11:41:01.481340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.980 [2024-07-15 11:41:01.481377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.980 [2024-07-15 11:41:01.481387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.980 [2024-07-15 11:41:01.481623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.980 [2024-07-15 11:41:01.481843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.980 [2024-07-15 11:41:01.481851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.980 [2024-07-15 11:41:01.481858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.980 [2024-07-15 11:41:01.485357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.980 [2024-07-15 11:41:01.494420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.980 [2024-07-15 11:41:01.495033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.980 [2024-07-15 11:41:01.495051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.980 [2024-07-15 11:41:01.495059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.981 [2024-07-15 11:41:01.495282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.981 [2024-07-15 11:41:01.495499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.981 [2024-07-15 11:41:01.495507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.981 [2024-07-15 11:41:01.495514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.981 [2024-07-15 11:41:01.499002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.981 [2024-07-15 11:41:01.508277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.981 [2024-07-15 11:41:01.508874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.981 [2024-07-15 11:41:01.508890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.981 [2024-07-15 11:41:01.508897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.981 [2024-07-15 11:41:01.509113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.981 [2024-07-15 11:41:01.509335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.981 [2024-07-15 11:41:01.509344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.981 [2024-07-15 11:41:01.509350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.981 [2024-07-15 11:41:01.512840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.981 [2024-07-15 11:41:01.522100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.981 [2024-07-15 11:41:01.522712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.981 [2024-07-15 11:41:01.522727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.981 [2024-07-15 11:41:01.522734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.981 [2024-07-15 11:41:01.522950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.981 [2024-07-15 11:41:01.523175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.981 [2024-07-15 11:41:01.523184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.981 [2024-07-15 11:41:01.523191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.981 [2024-07-15 11:41:01.526676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.981 [2024-07-15 11:41:01.535926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.981 [2024-07-15 11:41:01.536520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.981 [2024-07-15 11:41:01.536534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.981 [2024-07-15 11:41:01.536542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.981 [2024-07-15 11:41:01.536757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.981 [2024-07-15 11:41:01.536972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.981 [2024-07-15 11:41:01.536980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.981 [2024-07-15 11:41:01.536987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.981 [2024-07-15 11:41:01.540477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.981 [2024-07-15 11:41:01.549728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.981 [2024-07-15 11:41:01.550402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.981 [2024-07-15 11:41:01.550439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.981 [2024-07-15 11:41:01.550449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.981 [2024-07-15 11:41:01.550684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.981 [2024-07-15 11:41:01.550904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.981 [2024-07-15 11:41:01.550913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.981 [2024-07-15 11:41:01.550921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.981 [2024-07-15 11:41:01.554439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.981 [2024-07-15 11:41:01.563542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.981 [2024-07-15 11:41:01.564310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.981 [2024-07-15 11:41:01.564346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.981 [2024-07-15 11:41:01.564357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.981 [2024-07-15 11:41:01.564592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.981 [2024-07-15 11:41:01.564812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.981 [2024-07-15 11:41:01.564820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.981 [2024-07-15 11:41:01.564828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.981 [2024-07-15 11:41:01.568336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.981 [2024-07-15 11:41:01.577400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.981 [2024-07-15 11:41:01.578083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.981 [2024-07-15 11:41:01.578120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.981 [2024-07-15 11:41:01.578141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.981 [2024-07-15 11:41:01.578376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.981 [2024-07-15 11:41:01.578595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.981 [2024-07-15 11:41:01.578604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.981 [2024-07-15 11:41:01.578611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.981 [2024-07-15 11:41:01.582105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.981 [2024-07-15 11:41:01.591164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.981 [2024-07-15 11:41:01.591858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.981 [2024-07-15 11:41:01.591895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.981 [2024-07-15 11:41:01.591906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.981 [2024-07-15 11:41:01.592152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.981 [2024-07-15 11:41:01.592372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.981 [2024-07-15 11:41:01.592380] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.981 [2024-07-15 11:41:01.592388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.981 [2024-07-15 11:41:01.595884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.981 [2024-07-15 11:41:01.604941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.981 [2024-07-15 11:41:01.605662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.981 [2024-07-15 11:41:01.605699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.981 [2024-07-15 11:41:01.605710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.981 [2024-07-15 11:41:01.605946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.981 [2024-07-15 11:41:01.606174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.981 [2024-07-15 11:41:01.606184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.981 [2024-07-15 11:41:01.606191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.981 [2024-07-15 11:41:01.609684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.981 [2024-07-15 11:41:01.618739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.981 [2024-07-15 11:41:01.619493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.981 [2024-07-15 11:41:01.619530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.981 [2024-07-15 11:41:01.619546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.981 [2024-07-15 11:41:01.619781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.981 [2024-07-15 11:41:01.620001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.981 [2024-07-15 11:41:01.620009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.981 [2024-07-15 11:41:01.620017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.981 [2024-07-15 11:41:01.623520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.981 [2024-07-15 11:41:01.632582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.981 [2024-07-15 11:41:01.633329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.981 [2024-07-15 11:41:01.633365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.981 [2024-07-15 11:41:01.633376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.981 [2024-07-15 11:41:01.633611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.981 [2024-07-15 11:41:01.633831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.981 [2024-07-15 11:41:01.633839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.981 [2024-07-15 11:41:01.633847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.981 [2024-07-15 11:41:01.637349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.981 [2024-07-15 11:41:01.646429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.981 [2024-07-15 11:41:01.647148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.981 [2024-07-15 11:41:01.647185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.981 [2024-07-15 11:41:01.647197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.981 [2024-07-15 11:41:01.647437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.982 [2024-07-15 11:41:01.647656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.982 [2024-07-15 11:41:01.647665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.982 [2024-07-15 11:41:01.647672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.982 [2024-07-15 11:41:01.651173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.982 [2024-07-15 11:41:01.660237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.982 [2024-07-15 11:41:01.660908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.982 [2024-07-15 11:41:01.660945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.982 [2024-07-15 11:41:01.660955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.982 [2024-07-15 11:41:01.661200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.982 [2024-07-15 11:41:01.661421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.982 [2024-07-15 11:41:01.661433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.982 [2024-07-15 11:41:01.661441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.982 [2024-07-15 11:41:01.664933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.982 [2024-07-15 11:41:01.673987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.982 [2024-07-15 11:41:01.674702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.982 [2024-07-15 11:41:01.674739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:32.982 [2024-07-15 11:41:01.674749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:32.982 [2024-07-15 11:41:01.674985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:32.982 [2024-07-15 11:41:01.675215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.982 [2024-07-15 11:41:01.675225] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.982 [2024-07-15 11:41:01.675233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.982 [2024-07-15 11:41:01.678731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.244 [2024-07-15 11:41:01.687794] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.244 [2024-07-15 11:41:01.688438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.244 [2024-07-15 11:41:01.688457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.244 [2024-07-15 11:41:01.688465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.244 [2024-07-15 11:41:01.688682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.244 [2024-07-15 11:41:01.688898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.244 [2024-07-15 11:41:01.688906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.244 [2024-07-15 11:41:01.688913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.244 [2024-07-15 11:41:01.692406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.244 [2024-07-15 11:41:01.701659] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.244 [2024-07-15 11:41:01.702190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.244 [2024-07-15 11:41:01.702206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.244 [2024-07-15 11:41:01.702214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.244 [2024-07-15 11:41:01.702430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.244 [2024-07-15 11:41:01.702646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.244 [2024-07-15 11:41:01.702653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.244 [2024-07-15 11:41:01.702660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.244 [2024-07-15 11:41:01.706149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.244 [2024-07-15 11:41:01.715406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.244 [2024-07-15 11:41:01.716032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.244 [2024-07-15 11:41:01.716069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.244 [2024-07-15 11:41:01.716079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.244 [2024-07-15 11:41:01.716323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.244 [2024-07-15 11:41:01.716544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.244 [2024-07-15 11:41:01.716552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.244 [2024-07-15 11:41:01.716560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.244 [2024-07-15 11:41:01.720050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.244 [2024-07-15 11:41:01.729320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.244 [2024-07-15 11:41:01.730063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.244 [2024-07-15 11:41:01.730100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.244 [2024-07-15 11:41:01.730111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.244 [2024-07-15 11:41:01.730355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.244 [2024-07-15 11:41:01.730576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.244 [2024-07-15 11:41:01.730584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.244 [2024-07-15 11:41:01.730591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.244 [2024-07-15 11:41:01.734082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.244 [2024-07-15 11:41:01.743138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.244 [2024-07-15 11:41:01.743876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.244 [2024-07-15 11:41:01.743912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.244 [2024-07-15 11:41:01.743922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.244 [2024-07-15 11:41:01.744167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.244 [2024-07-15 11:41:01.744387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.244 [2024-07-15 11:41:01.744396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.244 [2024-07-15 11:41:01.744403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.244 [2024-07-15 11:41:01.747895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.244 [2024-07-15 11:41:01.756961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.244 [2024-07-15 11:41:01.757680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.244 [2024-07-15 11:41:01.757716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.244 [2024-07-15 11:41:01.757727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.244 [2024-07-15 11:41:01.757967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.244 [2024-07-15 11:41:01.758195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.244 [2024-07-15 11:41:01.758205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.244 [2024-07-15 11:41:01.758212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.244 [2024-07-15 11:41:01.761705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.244 [2024-07-15 11:41:01.770792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.244 [2024-07-15 11:41:01.771561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.244 [2024-07-15 11:41:01.771598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.244 [2024-07-15 11:41:01.771608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.244 [2024-07-15 11:41:01.771844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.244 [2024-07-15 11:41:01.772063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.244 [2024-07-15 11:41:01.772071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.244 [2024-07-15 11:41:01.772079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.244 [2024-07-15 11:41:01.775579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.244 [2024-07-15 11:41:01.784634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.244 [2024-07-15 11:41:01.785293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.244 [2024-07-15 11:41:01.785330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.244 [2024-07-15 11:41:01.785341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.244 [2024-07-15 11:41:01.785576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.244 [2024-07-15 11:41:01.785796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.244 [2024-07-15 11:41:01.785805] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.244 [2024-07-15 11:41:01.785812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.244 [2024-07-15 11:41:01.789315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.244 [2024-07-15 11:41:01.798371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.244 [2024-07-15 11:41:01.799115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.244 [2024-07-15 11:41:01.799159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.244 [2024-07-15 11:41:01.799169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.244 [2024-07-15 11:41:01.799405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.244 [2024-07-15 11:41:01.799625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.244 [2024-07-15 11:41:01.799633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.244 [2024-07-15 11:41:01.799645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.244 [2024-07-15 11:41:01.803143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.244 [2024-07-15 11:41:01.812216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.244 [2024-07-15 11:41:01.812873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.244 [2024-07-15 11:41:01.812891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.244 [2024-07-15 11:41:01.812899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.244 [2024-07-15 11:41:01.813116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.244 [2024-07-15 11:41:01.813338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.244 [2024-07-15 11:41:01.813346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.244 [2024-07-15 11:41:01.813353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.244 [2024-07-15 11:41:01.816844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.244 [2024-07-15 11:41:01.826103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.244 [2024-07-15 11:41:01.826713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.244 [2024-07-15 11:41:01.826729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.244 [2024-07-15 11:41:01.826737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.245 [2024-07-15 11:41:01.826952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.245 [2024-07-15 11:41:01.827174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.245 [2024-07-15 11:41:01.827183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.245 [2024-07-15 11:41:01.827190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.245 [2024-07-15 11:41:01.830677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.245 [2024-07-15 11:41:01.839931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.245 [2024-07-15 11:41:01.840463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.245 [2024-07-15 11:41:01.840478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.245 [2024-07-15 11:41:01.840486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.245 [2024-07-15 11:41:01.840701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.245 [2024-07-15 11:41:01.840917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.245 [2024-07-15 11:41:01.840925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.245 [2024-07-15 11:41:01.840931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.245 [2024-07-15 11:41:01.844421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.245 [2024-07-15 11:41:01.853680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.245 [2024-07-15 11:41:01.854322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.245 [2024-07-15 11:41:01.854338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.245 [2024-07-15 11:41:01.854345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.245 [2024-07-15 11:41:01.854561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.245 [2024-07-15 11:41:01.854776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.245 [2024-07-15 11:41:01.854785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.245 [2024-07-15 11:41:01.854791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.245 [2024-07-15 11:41:01.858282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.245 [2024-07-15 11:41:01.867531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.245 [2024-07-15 11:41:01.868218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.245 [2024-07-15 11:41:01.868255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.245 [2024-07-15 11:41:01.868265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.245 [2024-07-15 11:41:01.868501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.245 [2024-07-15 11:41:01.868720] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.245 [2024-07-15 11:41:01.868729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.245 [2024-07-15 11:41:01.868736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.245 [2024-07-15 11:41:01.872237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.245 [2024-07-15 11:41:01.881291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.245 [2024-07-15 11:41:01.882009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.245 [2024-07-15 11:41:01.882046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.245 [2024-07-15 11:41:01.882056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.245 [2024-07-15 11:41:01.882300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.245 [2024-07-15 11:41:01.882521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.245 [2024-07-15 11:41:01.882529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.245 [2024-07-15 11:41:01.882537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.245 [2024-07-15 11:41:01.886027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.245 [2024-07-15 11:41:01.895080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.245 [2024-07-15 11:41:01.895785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.245 [2024-07-15 11:41:01.895821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.245 [2024-07-15 11:41:01.895832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.245 [2024-07-15 11:41:01.896068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.245 [2024-07-15 11:41:01.896301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.245 [2024-07-15 11:41:01.896311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.245 [2024-07-15 11:41:01.896319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.245 [2024-07-15 11:41:01.899811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.245 [2024-07-15 11:41:01.908863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.245 [2024-07-15 11:41:01.909600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.245 [2024-07-15 11:41:01.909637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.245 [2024-07-15 11:41:01.909647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.245 [2024-07-15 11:41:01.909883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.245 [2024-07-15 11:41:01.910103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.245 [2024-07-15 11:41:01.910111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.245 [2024-07-15 11:41:01.910119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.245 [2024-07-15 11:41:01.913620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.245 [2024-07-15 11:41:01.922674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.245 [2024-07-15 11:41:01.923281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.245 [2024-07-15 11:41:01.923300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.245 [2024-07-15 11:41:01.923308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.245 [2024-07-15 11:41:01.923524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.245 [2024-07-15 11:41:01.923740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.245 [2024-07-15 11:41:01.923748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.245 [2024-07-15 11:41:01.923755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.245 [2024-07-15 11:41:01.927244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.245 [2024-07-15 11:41:01.936505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.245 [2024-07-15 11:41:01.937196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.245 [2024-07-15 11:41:01.937233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.245 [2024-07-15 11:41:01.937243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.245 [2024-07-15 11:41:01.937479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.245 [2024-07-15 11:41:01.937698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.245 [2024-07-15 11:41:01.937707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.245 [2024-07-15 11:41:01.937715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.245 [2024-07-15 11:41:01.941228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.507 [2024-07-15 11:41:01.950292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.507 [2024-07-15 11:41:01.950991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.507 [2024-07-15 11:41:01.951028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.507 [2024-07-15 11:41:01.951039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.507 [2024-07-15 11:41:01.951284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.507 [2024-07-15 11:41:01.951504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.508 [2024-07-15 11:41:01.951513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.508 [2024-07-15 11:41:01.951520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.508 [2024-07-15 11:41:01.955023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.508 [2024-07-15 11:41:01.964078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.508 [2024-07-15 11:41:01.964805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.508 [2024-07-15 11:41:01.964842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.508 [2024-07-15 11:41:01.964852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.508 [2024-07-15 11:41:01.965088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.508 [2024-07-15 11:41:01.965317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.508 [2024-07-15 11:41:01.965326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.508 [2024-07-15 11:41:01.965333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.508 [2024-07-15 11:41:01.968826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.508 [2024-07-15 11:41:01.977909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.508 [2024-07-15 11:41:01.978628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.508 [2024-07-15 11:41:01.978665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.508 [2024-07-15 11:41:01.978675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.508 [2024-07-15 11:41:01.978911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.508 [2024-07-15 11:41:01.979139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.508 [2024-07-15 11:41:01.979149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.508 [2024-07-15 11:41:01.979156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.508 [2024-07-15 11:41:01.982649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.508 [2024-07-15 11:41:01.991709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.508 [2024-07-15 11:41:01.992416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.508 [2024-07-15 11:41:01.992453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.508 [2024-07-15 11:41:01.992468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.508 [2024-07-15 11:41:01.992703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.508 [2024-07-15 11:41:01.992923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.508 [2024-07-15 11:41:01.992931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.508 [2024-07-15 11:41:01.992939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.508 [2024-07-15 11:41:01.996439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.508 [2024-07-15 11:41:02.005491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.508 [2024-07-15 11:41:02.006186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.508 [2024-07-15 11:41:02.006223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420
00:29:33.508 [2024-07-15 11:41:02.006233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set
00:29:33.508 [2024-07-15 11:41:02.006469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor
00:29:33.508 [2024-07-15 11:41:02.006688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.508 [2024-07-15 11:41:02.006697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.508 [2024-07-15 11:41:02.006705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.508 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3730651 Killed "${NVMF_APP[@]}" "$@"
00:29:33.508 11:41:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:29:33.508 11:41:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:33.508 11:41:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:33.508 11:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:29:33.508 11:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:33.508 [2024-07-15 11:41:02.010212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.508 11:41:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3732306
00:29:33.508 11:41:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3732306
00:29:33.508 11:41:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:33.508 11:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3732306 ']'
00:29:33.508 11:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:33.508 11:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:33.508 11:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:33.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
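At this point bdevperf.sh reports that the old target process (PID 3730651) was killed, and tgt_init starts a replacement nvmf_tgt inside the cvl_0_0_ns_spdk network namespace; waitforlisten then blocks with rpc_addr=/var/tmp/spdk.sock and max_retries=100 until the new process (PID 3732306) is ready. The sketch below only illustrates the condition being waited on; the polling loop and the "ready means an AF_UNIX connect() succeeds" criterion are assumptions for illustration, not the helper the harness actually uses:

    /* Illustration of what "waiting to listen on /var/tmp/spdk.sock" amounts to:
     * an AF_UNIX connect() to the RPC socket succeeding. The retry loop is an
     * assumption; only the socket path and the 100-retry budget come from the log. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    static int rpc_socket_ready(const char *path)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) {
            return 0;
        }

        struct sockaddr_un addr = {0};
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        int ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
        close(fd);
        return ok;
    }

    int main(void)
    {
        for (int attempt = 0; attempt < 100; attempt++) {   /* max_retries=100 in the log */
            if (rpc_socket_ready("/var/tmp/spdk.sock")) {
                printf("nvmf_tgt is listening on /var/tmp/spdk.sock\n");
                return 0;
            }
            sleep(1);
        }

        fprintf(stderr, "timed out waiting for /var/tmp/spdk.sock\n");
        return 1;
    }

In the log the wait succeeds once the new target prints its startup notices (SPDK and DPDK versions, available cores, reactors), which appear below interleaved with the remaining reconnect errors.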
00:29:33.508 11:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:33.508 11:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:33.508 [2024-07-15 11:41:02.019277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.508 [2024-07-15 11:41:02.019886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.508 [2024-07-15 11:41:02.019906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.508 [2024-07-15 11:41:02.019919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.508 [2024-07-15 11:41:02.020144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.508 [2024-07-15 11:41:02.020362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.508 [2024-07-15 11:41:02.020370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.508 [2024-07-15 11:41:02.020377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.508 [2024-07-15 11:41:02.023870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.508 [2024-07-15 11:41:02.033142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.508 [2024-07-15 11:41:02.033691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.508 [2024-07-15 11:41:02.033707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.508 [2024-07-15 11:41:02.033714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.508 [2024-07-15 11:41:02.033930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.508 [2024-07-15 11:41:02.034151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.508 [2024-07-15 11:41:02.034160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.508 [2024-07-15 11:41:02.034167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.508 [2024-07-15 11:41:02.037656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.508 [2024-07-15 11:41:02.046930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.508 [2024-07-15 11:41:02.047580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.508 [2024-07-15 11:41:02.047596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.508 [2024-07-15 11:41:02.047603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.508 [2024-07-15 11:41:02.047819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.508 [2024-07-15 11:41:02.048034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.508 [2024-07-15 11:41:02.048042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.508 [2024-07-15 11:41:02.048049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.508 [2024-07-15 11:41:02.051549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.508 [2024-07-15 11:41:02.060829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.508 [2024-07-15 11:41:02.061467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.508 [2024-07-15 11:41:02.061483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.508 [2024-07-15 11:41:02.061490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.508 [2024-07-15 11:41:02.061706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.508 [2024-07-15 11:41:02.061921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.508 [2024-07-15 11:41:02.061933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.508 [2024-07-15 11:41:02.061940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.508 [2024-07-15 11:41:02.065438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.508 [2024-07-15 11:41:02.069657] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:29:33.508 [2024-07-15 11:41:02.069712] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.508 [2024-07-15 11:41:02.074716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.508 [2024-07-15 11:41:02.075337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.508 [2024-07-15 11:41:02.075375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.508 [2024-07-15 11:41:02.075385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.508 [2024-07-15 11:41:02.075621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.508 [2024-07-15 11:41:02.075841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.508 [2024-07-15 11:41:02.075850] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.508 [2024-07-15 11:41:02.075858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.508 [2024-07-15 11:41:02.079371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.509 [2024-07-15 11:41:02.088722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.509 [2024-07-15 11:41:02.089464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.509 [2024-07-15 11:41:02.089502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.509 [2024-07-15 11:41:02.089513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.509 [2024-07-15 11:41:02.089749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.509 [2024-07-15 11:41:02.089969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.509 [2024-07-15 11:41:02.089978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.509 [2024-07-15 11:41:02.089986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.509 [2024-07-15 11:41:02.093485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.509 EAL: No free 2048 kB hugepages reported on node 1 00:29:33.509 [2024-07-15 11:41:02.102550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.509 [2024-07-15 11:41:02.103381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.509 [2024-07-15 11:41:02.103419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.509 [2024-07-15 11:41:02.103431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.509 [2024-07-15 11:41:02.103669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.509 [2024-07-15 11:41:02.103888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.509 [2024-07-15 11:41:02.103906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.509 [2024-07-15 11:41:02.103913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.509 [2024-07-15 11:41:02.107478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.509 [2024-07-15 11:41:02.116344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.509 [2024-07-15 11:41:02.117097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.509 [2024-07-15 11:41:02.117140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.509 [2024-07-15 11:41:02.117153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.509 [2024-07-15 11:41:02.117392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.509 [2024-07-15 11:41:02.117611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.509 [2024-07-15 11:41:02.117620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.509 [2024-07-15 11:41:02.117628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.509 [2024-07-15 11:41:02.121124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.509 [2024-07-15 11:41:02.130077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.509 [2024-07-15 11:41:02.130802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.509 [2024-07-15 11:41:02.130839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.509 [2024-07-15 11:41:02.130850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.509 [2024-07-15 11:41:02.131085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.509 [2024-07-15 11:41:02.131314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.509 [2024-07-15 11:41:02.131323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.509 [2024-07-15 11:41:02.131331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.509 [2024-07-15 11:41:02.134824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.509 [2024-07-15 11:41:02.143887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.509 [2024-07-15 11:41:02.144512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.509 [2024-07-15 11:41:02.144550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.509 [2024-07-15 11:41:02.144560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.509 [2024-07-15 11:41:02.144796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.509 [2024-07-15 11:41:02.145016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.509 [2024-07-15 11:41:02.145025] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.509 [2024-07-15 11:41:02.145032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.509 [2024-07-15 11:41:02.148532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.509 [2024-07-15 11:41:02.151628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:33.509 [2024-07-15 11:41:02.157817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.509 [2024-07-15 11:41:02.158607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.509 [2024-07-15 11:41:02.158645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.509 [2024-07-15 11:41:02.158656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.509 [2024-07-15 11:41:02.158893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.509 [2024-07-15 11:41:02.159113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.509 [2024-07-15 11:41:02.159128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.509 [2024-07-15 11:41:02.159136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.509 [2024-07-15 11:41:02.162631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.509 [2024-07-15 11:41:02.171691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.509 [2024-07-15 11:41:02.172364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.509 [2024-07-15 11:41:02.172401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.509 [2024-07-15 11:41:02.172412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.509 [2024-07-15 11:41:02.172648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.509 [2024-07-15 11:41:02.172868] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.509 [2024-07-15 11:41:02.172877] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.509 [2024-07-15 11:41:02.172885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.509 [2024-07-15 11:41:02.176381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.509 [2024-07-15 11:41:02.185492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.509 [2024-07-15 11:41:02.186339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.509 [2024-07-15 11:41:02.186376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.509 [2024-07-15 11:41:02.186387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.509 [2024-07-15 11:41:02.186623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.509 [2024-07-15 11:41:02.186843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.509 [2024-07-15 11:41:02.186852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.509 [2024-07-15 11:41:02.186859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.509 [2024-07-15 11:41:02.190358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.509 [2024-07-15 11:41:02.199420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.509 [2024-07-15 11:41:02.200179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.509 [2024-07-15 11:41:02.200216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.509 [2024-07-15 11:41:02.200228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.509 [2024-07-15 11:41:02.200473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.509 [2024-07-15 11:41:02.200693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.509 [2024-07-15 11:41:02.200701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.509 [2024-07-15 11:41:02.200709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.509 [2024-07-15 11:41:02.204213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.509 [2024-07-15 11:41:02.205242] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:33.509 [2024-07-15 11:41:02.205267] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:33.509 [2024-07-15 11:41:02.205273] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:33.509 [2024-07-15 11:41:02.205278] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:33.509 [2024-07-15 11:41:02.205284] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:33.509 [2024-07-15 11:41:02.205518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:33.509 [2024-07-15 11:41:02.205869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.509 [2024-07-15 11:41:02.205869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:33.772 [2024-07-15 11:41:02.213277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.772 [2024-07-15 11:41:02.214055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-07-15 11:41:02.214093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.772 [2024-07-15 11:41:02.214104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.772 [2024-07-15 11:41:02.214349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.772 [2024-07-15 11:41:02.214570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.772 [2024-07-15 11:41:02.214579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.772 [2024-07-15 11:41:02.214587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.772 [2024-07-15 11:41:02.218082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.772 [2024-07-15 11:41:02.227147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.772 [2024-07-15 11:41:02.227928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-07-15 11:41:02.227966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.772 [2024-07-15 11:41:02.227977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.772 [2024-07-15 11:41:02.228220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.772 [2024-07-15 11:41:02.228440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.772 [2024-07-15 11:41:02.228449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.772 [2024-07-15 11:41:02.228456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.773 [2024-07-15 11:41:02.231947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
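The replacement target was launched with -m 0xE, and the reactor messages above show how such a core mask is interpreted: bits 1, 2 and 3 are set and bit 0 is clear, so the application reports "Total cores available: 3" and starts reactors on cores 1, 2 and 3. The decoding below is for reference only and is not SPDK's own parsing code:

    /* Reference sketch: decodes a hexadecimal core mask into core indices.
     * For -m 0xE (binary 1110) it prints cores 1, 2 and 3, matching the
     * "Reactor started on core ..." lines in the log. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        unsigned long mask = strtoul("0xE", NULL, 16);   /* core mask from the log */

        printf("cores selected by mask 0x%lX:", mask);
        for (int core = 0; mask != 0; core++, mask >>= 1) {
            if (mask & 1UL) {
                printf(" %d", core);
            }
        }
        printf("\n");
        return 0;
    }

The same mask is handed through to DPDK as "-c 0xE" in the EAL parameter list above, so the EAL and the SPDK reactors agree on which cores are in use.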
00:29:33.773 [2024-07-15 11:41:02.241018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.773 [2024-07-15 11:41:02.241547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-07-15 11:41:02.241566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.773 [2024-07-15 11:41:02.241575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.773 [2024-07-15 11:41:02.241792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.773 [2024-07-15 11:41:02.242008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.773 [2024-07-15 11:41:02.242016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.773 [2024-07-15 11:41:02.242023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.773 [2024-07-15 11:41:02.245517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.773 [2024-07-15 11:41:02.254793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.773 [2024-07-15 11:41:02.255480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-07-15 11:41:02.255518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.773 [2024-07-15 11:41:02.255529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.773 [2024-07-15 11:41:02.255765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.773 [2024-07-15 11:41:02.255984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.773 [2024-07-15 11:41:02.255993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.773 [2024-07-15 11:41:02.256000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.773 [2024-07-15 11:41:02.259503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.773 [2024-07-15 11:41:02.268569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.773 [2024-07-15 11:41:02.269225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-07-15 11:41:02.269263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.773 [2024-07-15 11:41:02.269275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.773 [2024-07-15 11:41:02.269515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.773 [2024-07-15 11:41:02.269735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.773 [2024-07-15 11:41:02.269743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.773 [2024-07-15 11:41:02.269750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.773 [2024-07-15 11:41:02.273252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.773 [2024-07-15 11:41:02.282317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.773 [2024-07-15 11:41:02.283059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-07-15 11:41:02.283097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.773 [2024-07-15 11:41:02.283107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.773 [2024-07-15 11:41:02.283357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.773 [2024-07-15 11:41:02.283578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.773 [2024-07-15 11:41:02.283587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.773 [2024-07-15 11:41:02.283594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.773 [2024-07-15 11:41:02.287090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.773 [2024-07-15 11:41:02.296156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.773 [2024-07-15 11:41:02.296870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-07-15 11:41:02.296907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.773 [2024-07-15 11:41:02.296918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.773 [2024-07-15 11:41:02.297162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.773 [2024-07-15 11:41:02.297382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.773 [2024-07-15 11:41:02.297391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.773 [2024-07-15 11:41:02.297398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.773 [2024-07-15 11:41:02.300892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.773 [2024-07-15 11:41:02.309951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.773 [2024-07-15 11:41:02.310701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-07-15 11:41:02.310739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.773 [2024-07-15 11:41:02.310750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.773 [2024-07-15 11:41:02.310986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.773 [2024-07-15 11:41:02.311213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.773 [2024-07-15 11:41:02.311223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.773 [2024-07-15 11:41:02.311231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.773 [2024-07-15 11:41:02.314731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.773 [2024-07-15 11:41:02.323793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.773 [2024-07-15 11:41:02.324528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-07-15 11:41:02.324565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.773 [2024-07-15 11:41:02.324576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.773 [2024-07-15 11:41:02.324812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.773 [2024-07-15 11:41:02.325031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.773 [2024-07-15 11:41:02.325040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.773 [2024-07-15 11:41:02.325052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.773 [2024-07-15 11:41:02.328553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.773 [2024-07-15 11:41:02.337636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.774 [2024-07-15 11:41:02.338250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-07-15 11:41:02.338288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.774 [2024-07-15 11:41:02.338299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.774 [2024-07-15 11:41:02.338538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.774 [2024-07-15 11:41:02.338758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.774 [2024-07-15 11:41:02.338767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.774 [2024-07-15 11:41:02.338774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.774 [2024-07-15 11:41:02.342285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.774 [2024-07-15 11:41:02.351553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.774 [2024-07-15 11:41:02.352248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-07-15 11:41:02.352285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.774 [2024-07-15 11:41:02.352296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.774 [2024-07-15 11:41:02.352532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.774 [2024-07-15 11:41:02.352752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.774 [2024-07-15 11:41:02.352761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.774 [2024-07-15 11:41:02.352768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.774 [2024-07-15 11:41:02.356282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.774 [2024-07-15 11:41:02.365345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.774 [2024-07-15 11:41:02.366106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-07-15 11:41:02.366151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.774 [2024-07-15 11:41:02.366162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.774 [2024-07-15 11:41:02.366397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.774 [2024-07-15 11:41:02.366617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.774 [2024-07-15 11:41:02.366625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.774 [2024-07-15 11:41:02.366633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.774 [2024-07-15 11:41:02.370132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.774 [2024-07-15 11:41:02.379197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.774 [2024-07-15 11:41:02.379944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-07-15 11:41:02.379981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.774 [2024-07-15 11:41:02.379991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.774 [2024-07-15 11:41:02.380236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.774 [2024-07-15 11:41:02.380457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.774 [2024-07-15 11:41:02.380465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.774 [2024-07-15 11:41:02.380473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.774 [2024-07-15 11:41:02.383967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.774 [2024-07-15 11:41:02.393078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.774 [2024-07-15 11:41:02.393848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-07-15 11:41:02.393885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.774 [2024-07-15 11:41:02.393896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.774 [2024-07-15 11:41:02.394139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.774 [2024-07-15 11:41:02.394359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.774 [2024-07-15 11:41:02.394367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.774 [2024-07-15 11:41:02.394375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.774 [2024-07-15 11:41:02.397870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.774 [2024-07-15 11:41:02.406931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.774 [2024-07-15 11:41:02.407647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-07-15 11:41:02.407685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.774 [2024-07-15 11:41:02.407695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.774 [2024-07-15 11:41:02.407931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.774 [2024-07-15 11:41:02.408159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.774 [2024-07-15 11:41:02.408169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.774 [2024-07-15 11:41:02.408176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.774 [2024-07-15 11:41:02.411670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.774 [2024-07-15 11:41:02.420732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.774 [2024-07-15 11:41:02.421397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-07-15 11:41:02.421417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.774 [2024-07-15 11:41:02.421425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.774 [2024-07-15 11:41:02.421642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.774 [2024-07-15 11:41:02.421863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.774 [2024-07-15 11:41:02.421871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.774 [2024-07-15 11:41:02.421878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.774 [2024-07-15 11:41:02.425372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.774 [2024-07-15 11:41:02.434636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.774 [2024-07-15 11:41:02.435389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-07-15 11:41:02.435426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.775 [2024-07-15 11:41:02.435437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.775 [2024-07-15 11:41:02.435672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.775 [2024-07-15 11:41:02.435892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.775 [2024-07-15 11:41:02.435901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.775 [2024-07-15 11:41:02.435908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.775 [2024-07-15 11:41:02.439411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:33.775 [2024-07-15 11:41:02.448473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.775 [2024-07-15 11:41:02.449146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-07-15 11:41:02.449166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.775 [2024-07-15 11:41:02.449173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.775 [2024-07-15 11:41:02.449390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.775 [2024-07-15 11:41:02.449606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.775 [2024-07-15 11:41:02.449614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.775 [2024-07-15 11:41:02.449621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.775 [2024-07-15 11:41:02.453111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.775 [2024-07-15 11:41:02.462388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.775 [2024-07-15 11:41:02.463042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-07-15 11:41:02.463058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:33.775 [2024-07-15 11:41:02.463065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:33.775 [2024-07-15 11:41:02.463287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:33.775 [2024-07-15 11:41:02.463503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.775 [2024-07-15 11:41:02.463510] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.775 [2024-07-15 11:41:02.463517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.775 [2024-07-15 11:41:02.467009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.036 [2024-07-15 11:41:02.476157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.036 [2024-07-15 11:41:02.476826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.036 [2024-07-15 11:41:02.476862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.036 [2024-07-15 11:41:02.476874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.036 [2024-07-15 11:41:02.477109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.036 [2024-07-15 11:41:02.477341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.036 [2024-07-15 11:41:02.477352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.036 [2024-07-15 11:41:02.477359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.036 [2024-07-15 11:41:02.480855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.036 [2024-07-15 11:41:02.489915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.036 [2024-07-15 11:41:02.490381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.036 [2024-07-15 11:41:02.490399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.036 [2024-07-15 11:41:02.490406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.036 [2024-07-15 11:41:02.490623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.037 [2024-07-15 11:41:02.490838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.037 [2024-07-15 11:41:02.490847] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.037 [2024-07-15 11:41:02.490854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.037 [2024-07-15 11:41:02.494349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.037 [2024-07-15 11:41:02.503820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.037 [2024-07-15 11:41:02.504545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.037 [2024-07-15 11:41:02.504582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.037 [2024-07-15 11:41:02.504593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.037 [2024-07-15 11:41:02.504828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.037 [2024-07-15 11:41:02.505048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.037 [2024-07-15 11:41:02.505057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.037 [2024-07-15 11:41:02.505064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.037 [2024-07-15 11:41:02.508568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.037 [2024-07-15 11:41:02.517635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.037 [2024-07-15 11:41:02.518360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.037 [2024-07-15 11:41:02.518398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.037 [2024-07-15 11:41:02.518413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.037 [2024-07-15 11:41:02.518649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.037 [2024-07-15 11:41:02.518868] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.037 [2024-07-15 11:41:02.518877] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.037 [2024-07-15 11:41:02.518884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.037 [2024-07-15 11:41:02.522387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.037 [2024-07-15 11:41:02.531451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.037 [2024-07-15 11:41:02.532211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.037 [2024-07-15 11:41:02.532248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.037 [2024-07-15 11:41:02.532259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.037 [2024-07-15 11:41:02.532495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.037 [2024-07-15 11:41:02.532715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.037 [2024-07-15 11:41:02.532723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.037 [2024-07-15 11:41:02.532731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.037 [2024-07-15 11:41:02.536233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.037 [2024-07-15 11:41:02.545295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.037 [2024-07-15 11:41:02.545721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.037 [2024-07-15 11:41:02.545739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.037 [2024-07-15 11:41:02.545747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.037 [2024-07-15 11:41:02.545963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.037 [2024-07-15 11:41:02.546185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.037 [2024-07-15 11:41:02.546201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.037 [2024-07-15 11:41:02.546209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.037 [2024-07-15 11:41:02.549700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.037 [2024-07-15 11:41:02.559182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.037 [2024-07-15 11:41:02.559932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.037 [2024-07-15 11:41:02.559969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.037 [2024-07-15 11:41:02.559979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.037 [2024-07-15 11:41:02.560224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.037 [2024-07-15 11:41:02.560445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.037 [2024-07-15 11:41:02.560458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.037 [2024-07-15 11:41:02.560465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.037 [2024-07-15 11:41:02.563959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.037 [2024-07-15 11:41:02.573024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.037 [2024-07-15 11:41:02.573793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.037 [2024-07-15 11:41:02.573831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.037 [2024-07-15 11:41:02.573842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.037 [2024-07-15 11:41:02.574078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.037 [2024-07-15 11:41:02.574306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.037 [2024-07-15 11:41:02.574315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.037 [2024-07-15 11:41:02.574323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.037 [2024-07-15 11:41:02.577820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.037 [2024-07-15 11:41:02.586882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.037 [2024-07-15 11:41:02.587430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.037 [2024-07-15 11:41:02.587467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.037 [2024-07-15 11:41:02.587479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.037 [2024-07-15 11:41:02.587716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.037 [2024-07-15 11:41:02.587935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.037 [2024-07-15 11:41:02.587944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.037 [2024-07-15 11:41:02.587951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.037 [2024-07-15 11:41:02.591451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.038 [2024-07-15 11:41:02.600751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.038 [2024-07-15 11:41:02.601418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.038 [2024-07-15 11:41:02.601438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.038 [2024-07-15 11:41:02.601445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.038 [2024-07-15 11:41:02.601662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.038 [2024-07-15 11:41:02.601879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.038 [2024-07-15 11:41:02.601887] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.038 [2024-07-15 11:41:02.601894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.038 [2024-07-15 11:41:02.605388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.038 [2024-07-15 11:41:02.614659] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.038 [2024-07-15 11:41:02.615201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.038 [2024-07-15 11:41:02.615238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.038 [2024-07-15 11:41:02.615250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.038 [2024-07-15 11:41:02.615487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.038 [2024-07-15 11:41:02.615707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.038 [2024-07-15 11:41:02.615715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.038 [2024-07-15 11:41:02.615723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.038 [2024-07-15 11:41:02.619227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.038 [2024-07-15 11:41:02.628495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.038 [2024-07-15 11:41:02.629202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.038 [2024-07-15 11:41:02.629239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.038 [2024-07-15 11:41:02.629251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.038 [2024-07-15 11:41:02.629489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.038 [2024-07-15 11:41:02.629708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.038 [2024-07-15 11:41:02.629717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.038 [2024-07-15 11:41:02.629724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.038 [2024-07-15 11:41:02.633228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.038 [2024-07-15 11:41:02.642294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.038 [2024-07-15 11:41:02.643046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.038 [2024-07-15 11:41:02.643083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.038 [2024-07-15 11:41:02.643094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.038 [2024-07-15 11:41:02.643343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.038 [2024-07-15 11:41:02.643563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.038 [2024-07-15 11:41:02.643572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.038 [2024-07-15 11:41:02.643580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.038 [2024-07-15 11:41:02.647071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.038 [2024-07-15 11:41:02.656148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.038 [2024-07-15 11:41:02.656597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.038 [2024-07-15 11:41:02.656616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.038 [2024-07-15 11:41:02.656624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.038 [2024-07-15 11:41:02.656845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.038 [2024-07-15 11:41:02.657060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.038 [2024-07-15 11:41:02.657068] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.038 [2024-07-15 11:41:02.657075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.038 [2024-07-15 11:41:02.660571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.038 [2024-07-15 11:41:02.670039] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.038 [2024-07-15 11:41:02.670754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.038 [2024-07-15 11:41:02.670791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.038 [2024-07-15 11:41:02.670802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.038 [2024-07-15 11:41:02.671038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.038 [2024-07-15 11:41:02.671265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.038 [2024-07-15 11:41:02.671275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.038 [2024-07-15 11:41:02.671282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.038 [2024-07-15 11:41:02.674776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.038 [2024-07-15 11:41:02.683837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.038 [2024-07-15 11:41:02.684337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.038 [2024-07-15 11:41:02.684356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.038 [2024-07-15 11:41:02.684364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.038 [2024-07-15 11:41:02.684581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.038 [2024-07-15 11:41:02.684797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.038 [2024-07-15 11:41:02.684805] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.038 [2024-07-15 11:41:02.684812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.038 [2024-07-15 11:41:02.688307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.038 [2024-07-15 11:41:02.697572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.038 [2024-07-15 11:41:02.698357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.038 [2024-07-15 11:41:02.698394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.038 [2024-07-15 11:41:02.698406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.038 [2024-07-15 11:41:02.698645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.038 [2024-07-15 11:41:02.698864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.038 [2024-07-15 11:41:02.698873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.038 [2024-07-15 11:41:02.698885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.038 [2024-07-15 11:41:02.702389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.039 [2024-07-15 11:41:02.711454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.039 [2024-07-15 11:41:02.711967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.039 [2024-07-15 11:41:02.712005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.039 [2024-07-15 11:41:02.712016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.039 [2024-07-15 11:41:02.712259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.039 [2024-07-15 11:41:02.712480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.039 [2024-07-15 11:41:02.712488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.039 [2024-07-15 11:41:02.712496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.039 [2024-07-15 11:41:02.715992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.039 [2024-07-15 11:41:02.725263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.039 [2024-07-15 11:41:02.725778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.039 [2024-07-15 11:41:02.725815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.039 [2024-07-15 11:41:02.725826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.039 [2024-07-15 11:41:02.726062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.039 [2024-07-15 11:41:02.726289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.039 [2024-07-15 11:41:02.726299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.039 [2024-07-15 11:41:02.726306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.039 [2024-07-15 11:41:02.729803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.301 [2024-07-15 11:41:02.739070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.301 [2024-07-15 11:41:02.739707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.301 [2024-07-15 11:41:02.739724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.301 [2024-07-15 11:41:02.739732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.301 [2024-07-15 11:41:02.739949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.301 [2024-07-15 11:41:02.740172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.301 [2024-07-15 11:41:02.740181] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.301 [2024-07-15 11:41:02.740187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.301 [2024-07-15 11:41:02.743678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.301 [2024-07-15 11:41:02.752940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.301 [2024-07-15 11:41:02.753706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.301 [2024-07-15 11:41:02.753743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.301 [2024-07-15 11:41:02.753754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.301 [2024-07-15 11:41:02.753990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.301 [2024-07-15 11:41:02.754218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.301 [2024-07-15 11:41:02.754228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.301 [2024-07-15 11:41:02.754235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.301 [2024-07-15 11:41:02.757728] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.301 [2024-07-15 11:41:02.766796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.301 [2024-07-15 11:41:02.767430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.301 [2024-07-15 11:41:02.767468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.301 [2024-07-15 11:41:02.767478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.301 [2024-07-15 11:41:02.767714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.301 [2024-07-15 11:41:02.767934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.301 [2024-07-15 11:41:02.767943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.301 [2024-07-15 11:41:02.767950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.301 [2024-07-15 11:41:02.771453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.301 [2024-07-15 11:41:02.780725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.301 [2024-07-15 11:41:02.781445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.301 [2024-07-15 11:41:02.781483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.301 [2024-07-15 11:41:02.781494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.301 [2024-07-15 11:41:02.781731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.301 [2024-07-15 11:41:02.781950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.301 [2024-07-15 11:41:02.781960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.301 [2024-07-15 11:41:02.781969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.301 [2024-07-15 11:41:02.785471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.301 [2024-07-15 11:41:02.794536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.301 [2024-07-15 11:41:02.795204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.301 [2024-07-15 11:41:02.795224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.301 [2024-07-15 11:41:02.795232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.301 [2024-07-15 11:41:02.795449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.301 [2024-07-15 11:41:02.795670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.301 [2024-07-15 11:41:02.795679] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.301 [2024-07-15 11:41:02.795686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.301 [2024-07-15 11:41:02.799182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.301 [2024-07-15 11:41:02.808472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.301 [2024-07-15 11:41:02.809039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.301 [2024-07-15 11:41:02.809076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.301 [2024-07-15 11:41:02.809088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.301 [2024-07-15 11:41:02.809333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.301 [2024-07-15 11:41:02.809553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.301 [2024-07-15 11:41:02.809561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.301 [2024-07-15 11:41:02.809569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.301 [2024-07-15 11:41:02.813063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.301 [2024-07-15 11:41:02.822334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.301 [2024-07-15 11:41:02.823037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.301 [2024-07-15 11:41:02.823073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.301 [2024-07-15 11:41:02.823085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.301 [2024-07-15 11:41:02.823333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.301 [2024-07-15 11:41:02.823553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.301 [2024-07-15 11:41:02.823562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.301 [2024-07-15 11:41:02.823569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.302 [2024-07-15 11:41:02.827064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.302 [2024-07-15 11:41:02.836133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.302 [2024-07-15 11:41:02.836711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.302 [2024-07-15 11:41:02.836748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.302 [2024-07-15 11:41:02.836758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.302 [2024-07-15 11:41:02.836994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.302 [2024-07-15 11:41:02.837221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.302 [2024-07-15 11:41:02.837231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.302 [2024-07-15 11:41:02.837238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.302 11:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:34.302 11:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:34.302 11:41:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:34.302 11:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:34.302 11:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:34.302 [2024-07-15 11:41:02.840742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.302 [2024-07-15 11:41:02.850010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.302 [2024-07-15 11:41:02.850725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.302 [2024-07-15 11:41:02.850763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.302 [2024-07-15 11:41:02.850773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.302 [2024-07-15 11:41:02.851009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.302 [2024-07-15 11:41:02.851237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.302 [2024-07-15 11:41:02.851247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.302 [2024-07-15 11:41:02.851255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.302 [2024-07-15 11:41:02.854763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.302 [2024-07-15 11:41:02.863828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.302 [2024-07-15 11:41:02.864536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.302 [2024-07-15 11:41:02.864573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.302 [2024-07-15 11:41:02.864584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.302 [2024-07-15 11:41:02.864820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.302 [2024-07-15 11:41:02.865039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.302 [2024-07-15 11:41:02.865048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.302 [2024-07-15 11:41:02.865055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.302 [2024-07-15 11:41:02.868557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.302 [2024-07-15 11:41:02.877621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.302 [2024-07-15 11:41:02.878370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.302 [2024-07-15 11:41:02.878407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.302 [2024-07-15 11:41:02.878418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.302 [2024-07-15 11:41:02.878656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.302 [2024-07-15 11:41:02.878876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.302 [2024-07-15 11:41:02.878885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.302 [2024-07-15 11:41:02.878892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.302 11:41:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.302 11:41:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:34.302 11:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.302 11:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:34.302 [2024-07-15 11:41:02.882399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.302 [2024-07-15 11:41:02.884132] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.302 11:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.302 11:41:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:34.302 11:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.302 11:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:34.302 [2024-07-15 11:41:02.891463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.302 [2024-07-15 11:41:02.892134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.302 [2024-07-15 11:41:02.892153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.302 [2024-07-15 11:41:02.892161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.302 [2024-07-15 11:41:02.892378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.302 [2024-07-15 11:41:02.892594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.302 [2024-07-15 11:41:02.892601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.302 [2024-07-15 11:41:02.892608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
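The two rpc_cmd calls traced just above from host/bdevperf.sh map onto plain scripts/rpc.py invocations against the target's RPC socket. As a sketch only - the socket path below is an assumption, while the flags are exactly those shown in the trace (-t tcp -o -u 8192 for the TCP transport; a 64 MB malloc bdev with 512-byte blocks named Malloc0):
# Sketch, not the harness's own rpc_cmd wrapper; /var/tmp/spdk.sock is an assumed RPC listen address.
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0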
00:29:34.302 [2024-07-15 11:41:02.896099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.302 [2024-07-15 11:41:02.905360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.302 [2024-07-15 11:41:02.906040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.302 [2024-07-15 11:41:02.906077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.302 [2024-07-15 11:41:02.906088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.302 [2024-07-15 11:41:02.906335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.302 [2024-07-15 11:41:02.906556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.302 [2024-07-15 11:41:02.906564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.302 [2024-07-15 11:41:02.906572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.302 [2024-07-15 11:41:02.910065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.302 [2024-07-15 11:41:02.919132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.302 [2024-07-15 11:41:02.919862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.302 [2024-07-15 11:41:02.919898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.302 [2024-07-15 11:41:02.919909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.302 [2024-07-15 11:41:02.920153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.302 [2024-07-15 11:41:02.920378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.302 [2024-07-15 11:41:02.920387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.302 [2024-07-15 11:41:02.920395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.302 Malloc0 00:29:34.302 11:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.302 11:41:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:34.302 11:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.302 11:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:34.302 [2024-07-15 11:41:02.923892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.302 [2024-07-15 11:41:02.932956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.302 11:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.302 11:41:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:34.302 [2024-07-15 11:41:02.933449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.302 [2024-07-15 11:41:02.933487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.302 [2024-07-15 11:41:02.933497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.302 11:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.302 [2024-07-15 11:41:02.933734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.302 11:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:34.302 [2024-07-15 11:41:02.933953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.302 [2024-07-15 11:41:02.933962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.302 [2024-07-15 11:41:02.933970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.302 [2024-07-15 11:41:02.937473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:34.302 11:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.302 11:41:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:34.302 11:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.303 11:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:34.303 [2024-07-15 11:41:02.946741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.303 [2024-07-15 11:41:02.947408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.303 [2024-07-15 11:41:02.947445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd03b0 with addr=10.0.0.2, port=4420 00:29:34.303 [2024-07-15 11:41:02.947456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd03b0 is same with the state(5) to be set 00:29:34.303 [2024-07-15 11:41:02.947692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd03b0 (9): Bad file descriptor 00:29:34.303 [2024-07-15 11:41:02.947912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.303 [2024-07-15 11:41:02.947920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.303 [2024-07-15 11:41:02.947927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.303 [2024-07-15 11:41:02.951432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
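Interleaved with the reconnect failures, the harness configures the target through its rpc_cmd wrapper: create the TCP transport, a Malloc0 bdev, subsystem cnode1, attach the namespace, and add the 10.0.0.2:4420 listener. As a rough standalone sketch of the same sequence (assuming an SPDK source tree and the default RPC socket at /var/tmp/spdk.sock; the test's rpc_cmd wrapper normally handles the socket/path details):

    rpc=./scripts/rpc.py   # path inside an SPDK checkout (assumption)
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0    # 64 MB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420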
00:29:34.303 [2024-07-15 11:41:02.952175] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.303 11:41:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.303 11:41:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3731290 00:29:34.303 [2024-07-15 11:41:02.960514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.563 [2024-07-15 11:41:03.172180] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:44.605 00:29:44.605 Latency(us) 00:29:44.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.605 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:44.605 Verification LBA range: start 0x0 length 0x4000 00:29:44.605 Nvme1n1 : 15.01 8281.95 32.35 10257.15 0.00 6878.45 1044.48 14964.05 00:29:44.605 =================================================================================================================== 00:29:44.605 Total : 8281.95 32.35 10257.15 0.00 6878.45 1044.48 14964.05 00:29:44.605 11:41:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:44.605 11:41:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:44.605 11:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.605 11:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:44.605 11:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:44.606 rmmod nvme_tcp 00:29:44.606 rmmod nvme_fabrics 00:29:44.606 rmmod nvme_keyring 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3732306 ']' 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3732306 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 3732306 ']' 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 3732306 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3732306 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3732306' 00:29:44.606 killing process with pid 3732306 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 3732306 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 3732306 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:44.606 11:41:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.547 11:41:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:45.547 00:29:45.547 real 0m27.640s 00:29:45.547 user 1m2.600s 00:29:45.547 sys 0m7.145s 00:29:45.547 11:41:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:45.547 11:41:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:45.547 ************************************ 00:29:45.547 END TEST nvmf_bdevperf 00:29:45.547 ************************************ 00:29:45.547 11:41:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:45.547 11:41:14 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:45.547 11:41:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:45.547 11:41:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:45.547 11:41:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:45.547 ************************************ 00:29:45.547 START TEST nvmf_target_disconnect 00:29:45.547 ************************************ 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:45.547 * Looking for test storage... 
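On the bdevperf summary printed above: the 32.35 MiB/s figure is consistent with 8281.95 IOPS at the 4096-byte I/O size, and the non-zero Fail/s column is expected for this run because the controller is deliberately disconnected and reset while the workload is active. A quick arithmetic check of the throughput number:

    echo 'scale=2; 8281.95 * 4096 / 1048576' | bc
    # -> 32.35   (IOPS * 4 KiB, expressed in MiB/s)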
00:29:45.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:45.547 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:45.548 11:41:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:29:45.548 11:41:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:53.686 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:53.686 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.686 11:41:20 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:53.686 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:53.686 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:53.686 11:41:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:53.686 11:41:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:53.686 11:41:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:53.686 11:41:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:53.686 11:41:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:29:53.686 11:41:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:53.686 11:41:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:53.686 11:41:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:53.686 11:41:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:53.686 11:41:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:53.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:53.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.427 ms 00:29:53.686 00:29:53.686 --- 10.0.0.2 ping statistics --- 00:29:53.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.686 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:29:53.686 11:41:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:53.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:53.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.435 ms 00:29:53.686 00:29:53.686 --- 10.0.0.1 ping statistics --- 00:29:53.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.686 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:53.687 ************************************ 00:29:53.687 START TEST nvmf_target_disconnect_tc1 00:29:53.687 ************************************ 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:29:53.687 
11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:53.687 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.687 [2024-07-15 11:41:21.458146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.687 [2024-07-15 11:41:21.458206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x176ae20 with addr=10.0.0.2, port=4420 00:29:53.687 [2024-07-15 11:41:21.458238] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:53.687 [2024-07-15 11:41:21.458254] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:53.687 [2024-07-15 11:41:21.458262] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:53.687 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:53.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:53.687 Initializing NVMe Controllers 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:53.687 00:29:53.687 real 0m0.104s 00:29:53.687 user 0m0.044s 00:29:53.687 sys 
0m0.057s 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:53.687 ************************************ 00:29:53.687 END TEST nvmf_target_disconnect_tc1 00:29:53.687 ************************************ 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:53.687 ************************************ 00:29:53.687 START TEST nvmf_target_disconnect_tc2 00:29:53.687 ************************************ 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3738351 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3738351 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3738351 ']' 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
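The tc1 case above reduces to: run the reconnect example against 10.0.0.2:4420 before any target is listening and require it to fail (the harness does this through its NOT/valid_exec_arg machinery). A minimal sketch of the same check, assuming $SPDK_DIR points at a built SPDK tree in place of the workspace path used in the log:

    if "$SPDK_DIR"/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
           -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
        echo "unexpected: probe succeeded with no listener" >&2
        exit 1
    fi
    echo "spdk_nvme_probe failed with ECONNREFUSED, as the test expects"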
00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.687 11:41:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:53.687 [2024-07-15 11:41:21.597784] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:29:53.687 [2024-07-15 11:41:21.597836] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.687 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.687 [2024-07-15 11:41:21.685256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:53.687 [2024-07-15 11:41:21.778843] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:53.687 [2024-07-15 11:41:21.778902] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:53.687 [2024-07-15 11:41:21.778911] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.687 [2024-07-15 11:41:21.778918] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:53.687 [2024-07-15 11:41:21.778925] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:53.687 [2024-07-15 11:41:21.779619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:29:53.687 [2024-07-15 11:41:21.779843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:29:53.687 [2024-07-15 11:41:21.780073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:29:53.687 [2024-07-15 11:41:21.780073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:53.687 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:53.687 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:53.687 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:53.687 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:53.687 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.949 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:53.949 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:53.949 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.949 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.949 Malloc0 00:29:53.949 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:29:53.949 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:53.949 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.949 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.949 [2024-07-15 11:41:22.455399] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.949 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.949 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:53.949 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.949 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.949 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.949 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:53.949 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.949 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.949 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.949 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:53.949 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.949 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.949 [2024-07-15 11:41:22.495794] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.949 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.949 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:53.949 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.949 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.950 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.950 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3738598 00:29:53.950 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:53.950 11:41:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:53.950 EAL: No free 2048 kB hugepages reported on node 1 00:29:55.864 11:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3738351 00:29:55.864 11:41:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:55.864 Read completed with error (sct=0, sc=8) 00:29:55.864 starting I/O failed 00:29:55.864 Read completed with error (sct=0, sc=8) 00:29:55.864 starting I/O failed 00:29:55.864 Read completed with error (sct=0, sc=8) 00:29:55.864 starting I/O failed 00:29:55.864 Read completed with error (sct=0, sc=8) 00:29:55.864 starting I/O failed 00:29:55.864 Read completed with error (sct=0, sc=8) 00:29:55.864 starting I/O failed 00:29:55.864 Read completed with error (sct=0, sc=8) 00:29:55.864 starting I/O failed 00:29:55.864 Write completed with error (sct=0, sc=8) 00:29:55.864 starting I/O failed 00:29:55.864 Write completed with error (sct=0, sc=8) 00:29:55.864 starting I/O failed 00:29:55.864 Read completed with error (sct=0, sc=8) 00:29:55.864 starting I/O failed 00:29:55.864 Write completed with error (sct=0, sc=8) 00:29:55.864 starting I/O failed 00:29:55.864 Write completed with error (sct=0, sc=8) 00:29:55.864 starting I/O failed 00:29:55.864 Read completed with error (sct=0, sc=8) 00:29:55.864 starting I/O failed 00:29:55.865 Write completed with error (sct=0, sc=8) 00:29:55.865 starting I/O failed 00:29:55.865 Read completed with error (sct=0, sc=8) 00:29:55.865 starting I/O failed 00:29:55.865 Write completed with error (sct=0, sc=8) 00:29:55.865 starting I/O failed 00:29:55.865 Read completed with error (sct=0, sc=8) 00:29:55.865 starting I/O failed 00:29:55.865 Write completed with error (sct=0, sc=8) 00:29:55.865 starting I/O failed 00:29:55.865 Write completed with error (sct=0, sc=8) 00:29:55.865 starting I/O failed 00:29:55.865 Write completed with error (sct=0, sc=8) 00:29:55.865 starting I/O failed 00:29:55.865 Write completed with error (sct=0, sc=8) 00:29:55.865 starting I/O failed 00:29:55.865 Write completed with error (sct=0, sc=8) 00:29:55.865 starting I/O failed 00:29:55.865 Write completed with error (sct=0, sc=8) 00:29:55.865 starting I/O failed 00:29:55.865 Write completed with error (sct=0, sc=8) 00:29:55.865 starting I/O failed 00:29:55.865 Read completed with error (sct=0, sc=8) 00:29:55.865 starting I/O failed 00:29:55.865 Write completed with error (sct=0, sc=8) 00:29:55.865 starting I/O failed 00:29:55.865 Read completed with error (sct=0, sc=8) 00:29:55.865 starting I/O failed 00:29:55.865 Write completed with error (sct=0, sc=8) 00:29:55.865 starting I/O failed 00:29:55.865 Write completed with error (sct=0, sc=8) 00:29:55.865 starting I/O failed 00:29:55.865 Write completed with error (sct=0, sc=8) 00:29:55.865 starting I/O failed 00:29:55.865 Read completed with error (sct=0, sc=8) 00:29:55.865 starting I/O failed 00:29:55.865 Read completed with error (sct=0, sc=8) 00:29:55.865 starting I/O failed 00:29:55.865 Read completed with error (sct=0, sc=8) 00:29:55.865 starting I/O failed 00:29:55.865 [2024-07-15 11:41:24.528712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.865 [2024-07-15 11:41:24.529349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:55.865 [2024-07-15 11:41:24.529385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.529705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.529717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.529990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.530000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.530459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.530495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.530833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.530846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.531165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.531183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.531681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.531691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.532086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.532095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.532499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.532510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.532915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.532925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 
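Every block in this stretch is the initiator retrying the same refused address. To confirm independently that nothing is listening on the target port while nvmf_tgt is down (a side check, not part of the test; assumes bash with /dev/tcp support and coreutils timeout):

    timeout 2 bash -c 'exec 3<> /dev/tcp/10.0.0.2/4420' \
        && echo 'port 4420 is accepting connections' \
        || echo 'port 4420 refused/unreachable (matches the errno 111 logged here)'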
00:29:55.865 [2024-07-15 11:41:24.533326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.533336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.533602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.533611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.533983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.533992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.534248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.534258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.534596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.534605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.534989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.534999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.535251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.535261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.535650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.535659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.536057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.536066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.536470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.536481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 
00:29:55.865 [2024-07-15 11:41:24.536848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.536858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.537165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.537175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.537582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.537592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.537945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.537955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.538349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.538359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.538769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.538779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.539169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.539179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.539591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.539601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.539990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.540000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.540389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.540398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 
00:29:55.865 [2024-07-15 11:41:24.540622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.540636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.540938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.540948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.541263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.865 [2024-07-15 11:41:24.541273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.865 qpair failed and we were unable to recover it. 00:29:55.865 [2024-07-15 11:41:24.541672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.541682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.542092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.542102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.542467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.542477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.542749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.542759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.542960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.542971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.543373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.543382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.543783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.543793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 
00:29:55.866 [2024-07-15 11:41:24.544205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.544215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.544514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.544524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.544872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.544882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.545173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.545183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.545563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.545572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.545945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.545957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.546330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.546340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.546700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.546710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.547130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.547141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.547487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.547497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 
00:29:55.866 [2024-07-15 11:41:24.547907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.547916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.548253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.548262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.548601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.548610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.548938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.548947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.549414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.549423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.549799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.549807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.550173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.550182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.550404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.550414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.550724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.550734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.551137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.551147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 
00:29:55.866 [2024-07-15 11:41:24.551551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.551561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.551963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.551972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.552262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.552272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.552715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.552725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.553107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.553117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.553551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.553561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.553927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.553936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.554297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.554307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.554453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.554463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.554840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.554850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 
00:29:55.866 [2024-07-15 11:41:24.555192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.555202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.555588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.555597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.555795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.866 [2024-07-15 11:41:24.555805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.866 qpair failed and we were unable to recover it. 00:29:55.866 [2024-07-15 11:41:24.556086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.867 [2024-07-15 11:41:24.556095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.867 qpair failed and we were unable to recover it. 00:29:55.867 [2024-07-15 11:41:24.556516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.867 [2024-07-15 11:41:24.556528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.867 qpair failed and we were unable to recover it. 00:29:55.867 [2024-07-15 11:41:24.556931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.867 [2024-07-15 11:41:24.556942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.867 qpair failed and we were unable to recover it. 00:29:55.867 [2024-07-15 11:41:24.557298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.867 [2024-07-15 11:41:24.557310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.867 qpair failed and we were unable to recover it. 00:29:55.867 [2024-07-15 11:41:24.557656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.867 [2024-07-15 11:41:24.557667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.867 qpair failed and we were unable to recover it. 00:29:55.867 [2024-07-15 11:41:24.558044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.867 [2024-07-15 11:41:24.558055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.867 qpair failed and we were unable to recover it. 00:29:55.867 [2024-07-15 11:41:24.558445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.867 [2024-07-15 11:41:24.558457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.867 qpair failed and we were unable to recover it. 
00:29:55.867 [2024-07-15 11:41:24.558873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.867 [2024-07-15 11:41:24.558885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.867 qpair failed and we were unable to recover it. 00:29:55.867 [2024-07-15 11:41:24.559297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.867 [2024-07-15 11:41:24.559308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.867 qpair failed and we were unable to recover it. 00:29:55.867 [2024-07-15 11:41:24.559715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.867 [2024-07-15 11:41:24.559727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.867 qpair failed and we were unable to recover it. 00:29:55.867 [2024-07-15 11:41:24.560135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.867 [2024-07-15 11:41:24.560147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.867 qpair failed and we were unable to recover it. 00:29:55.867 [2024-07-15 11:41:24.560545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.867 [2024-07-15 11:41:24.560557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.867 qpair failed and we were unable to recover it. 00:29:55.867 [2024-07-15 11:41:24.560879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.867 [2024-07-15 11:41:24.560893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.867 qpair failed and we were unable to recover it. 00:29:55.867 [2024-07-15 11:41:24.561308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.867 [2024-07-15 11:41:24.561320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.867 qpair failed and we were unable to recover it. 00:29:55.867 [2024-07-15 11:41:24.561645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.867 [2024-07-15 11:41:24.561656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.867 qpair failed and we were unable to recover it. 00:29:55.867 [2024-07-15 11:41:24.562036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.867 [2024-07-15 11:41:24.562047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.867 qpair failed and we were unable to recover it. 00:29:55.867 [2024-07-15 11:41:24.562421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.867 [2024-07-15 11:41:24.562433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.867 qpair failed and we were unable to recover it. 
00:29:55.867 [2024-07-15 11:41:24.562837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.867 [2024-07-15 11:41:24.562848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.867 qpair failed and we were unable to recover it. 00:29:55.867 [2024-07-15 11:41:24.563246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.867 [2024-07-15 11:41:24.563258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.867 qpair failed and we were unable to recover it. 00:29:55.867 [2024-07-15 11:41:24.563629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.867 [2024-07-15 11:41:24.563640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.867 qpair failed and we were unable to recover it. 00:29:55.867 [2024-07-15 11:41:24.564011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.867 [2024-07-15 11:41:24.564022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.867 qpair failed and we were unable to recover it. 00:29:55.867 [2024-07-15 11:41:24.564451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.867 [2024-07-15 11:41:24.564462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:55.867 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 11:41:24.564866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 11:41:24.564878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 11:41:24.565292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 11:41:24.565304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 11:41:24.565700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 11:41:24.565712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 11:41:24.566117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 11:41:24.566133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 11:41:24.566527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 11:41:24.566543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 
00:29:56.138 [2024-07-15 11:41:24.566832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 11:41:24.566848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 11:41:24.567237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 11:41:24.567253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 11:41:24.567684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 11:41:24.567699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 11:41:24.568151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 11:41:24.568167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 11:41:24.568550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 11:41:24.568566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 11:41:24.568822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 11:41:24.568839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 11:41:24.569267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 11:41:24.569284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 11:41:24.569562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 11:41:24.569577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 11:41:24.569985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 11:41:24.570001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 11:41:24.570419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 11:41:24.570435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 
00:29:56.138 [2024-07-15 11:41:24.570829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 11:41:24.570845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 11:41:24.571250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 11:41:24.571266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 11:41:24.571731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 11:41:24.571747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 11:41:24.572127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 11:41:24.572144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 11:41:24.572454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 11:41:24.572469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 11:41:24.572884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 11:41:24.572900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 11:41:24.573300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 11:41:24.573315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 11:41:24.573722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 11:41:24.573738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 11:41:24.574093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 11:41:24.574109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 00:29:56.138 [2024-07-15 11:41:24.574515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.138 [2024-07-15 11:41:24.574531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.138 qpair failed and we were unable to recover it. 
00:29:56.138 [2024-07-15 11:41:24.574905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.574921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.575411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.575466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.575897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.575916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.576324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.576341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.576748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.576764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.577133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.577156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.577510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.577526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.577887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.577903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.578473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.578539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.578881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.578909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 
00:29:56.139 [2024-07-15 11:41:24.579229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.579251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.579687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.579707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.580143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.580164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.580540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.580560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.580966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.580985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.581426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.581447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.581757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.581776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.582085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.582105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.582492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.582511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.582981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.583000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 
00:29:56.139 [2024-07-15 11:41:24.583417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.583438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.583868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.583888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.584277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.584297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.584710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.584729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.585038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.585057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.585392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.585412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.585728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.585750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.586157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.586178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.586554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.586575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.586981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.587001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 
00:29:56.139 [2024-07-15 11:41:24.587416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.587436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.587845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.587865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.588284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.588305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.588717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.588736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.589150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.589172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.589489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.589509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.589886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.589905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.590357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.590378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.590715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.590735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 00:29:56.139 [2024-07-15 11:41:24.591130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.591151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.139 qpair failed and we were unable to recover it. 
00:29:56.139 [2024-07-15 11:41:24.591541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.139 [2024-07-15 11:41:24.591568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.591967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.591994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.592374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.592402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.592830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.592857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.593285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.593312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.593730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.593765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.594185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.594213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.594511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.594542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.594960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.594987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.595418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.595446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 
00:29:56.140 [2024-07-15 11:41:24.595878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.595905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.596305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.596332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.596737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.596764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.597186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.597214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.597616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.597642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.598068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.598095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.598521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.598550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.598872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.598899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.599215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.599263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.599679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.599706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 
00:29:56.140 [2024-07-15 11:41:24.600140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.600168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.600503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.600530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.600947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.600973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.601388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.601416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.601906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.601932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.602245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.602280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.602685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.602712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.603080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.603106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.603426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.603453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.603845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.603872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 
00:29:56.140 [2024-07-15 11:41:24.604267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.604295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.604693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.604719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.605132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.605162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.605625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.605652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.606077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.606104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.606521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.606548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.606956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.606983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.607409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.607436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.607863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.607890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.608189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.608217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 
00:29:56.140 [2024-07-15 11:41:24.608643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.140 [2024-07-15 11:41:24.608669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.140 qpair failed and we were unable to recover it. 00:29:56.140 [2024-07-15 11:41:24.609080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.609107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.609462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.609489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.609893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.609920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.610331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.610360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.610762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.610794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.611215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.611243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.611658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.611685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.612048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.612075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.612484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.612511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 
00:29:56.141 [2024-07-15 11:41:24.612947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.612973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.613399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.613426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.613723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.613753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.614080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.614111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.614528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.614555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.614978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.615004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.615421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.615448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.615854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.615880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.616302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.616330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.616688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.616715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 
00:29:56.141 [2024-07-15 11:41:24.617105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.617151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.617577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.617605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.618023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.618050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.618467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.618497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.618919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.618946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.619344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.619372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.619810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.619837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.620258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.620286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.620694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.620720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.621130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.621158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 
00:29:56.141 [2024-07-15 11:41:24.621491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.621522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.621954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.621981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.622353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.622382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.622780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.622806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.623231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.623259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.623642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.623669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.624080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.624107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.624417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.624448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.624887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.624915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.625397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.625425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 
00:29:56.141 [2024-07-15 11:41:24.625864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.625891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.141 [2024-07-15 11:41:24.626285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.141 [2024-07-15 11:41:24.626313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.141 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.626721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.626747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.627066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.627096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.627524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.627551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.627983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.628016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.628426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.628454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.628891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.628917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.629327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.629355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.629766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.629793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 
00:29:56.142 [2024-07-15 11:41:24.630201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.630229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.630627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.630653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.631069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.631095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.631404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.631431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.631863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.631889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.632299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.632326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.632730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.632755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.633178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.633206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.633637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.633664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.634098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.634135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 
00:29:56.142 [2024-07-15 11:41:24.634459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.634490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.634930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.634956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.635380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.635408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.635826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.635854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.636216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.636244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.636681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.636708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.637003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.637034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.637458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.637486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.637889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.637916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.638343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.638370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 
00:29:56.142 [2024-07-15 11:41:24.638761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.638788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.639130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.639159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.639590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.639617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.639991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.142 [2024-07-15 11:41:24.640018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.142 qpair failed and we were unable to recover it. 00:29:56.142 [2024-07-15 11:41:24.640420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.143 [2024-07-15 11:41:24.640448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.143 qpair failed and we were unable to recover it. 00:29:56.143 [2024-07-15 11:41:24.640872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.143 [2024-07-15 11:41:24.640899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.143 qpair failed and we were unable to recover it. 00:29:56.143 [2024-07-15 11:41:24.641330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.143 [2024-07-15 11:41:24.641357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.143 qpair failed and we were unable to recover it. 00:29:56.143 [2024-07-15 11:41:24.641785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.143 [2024-07-15 11:41:24.641812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.143 qpair failed and we were unable to recover it. 00:29:56.143 [2024-07-15 11:41:24.642232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.143 [2024-07-15 11:41:24.642260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.143 qpair failed and we were unable to recover it. 00:29:56.143 [2024-07-15 11:41:24.642695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.143 [2024-07-15 11:41:24.642722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.143 qpair failed and we were unable to recover it. 
00:29:56.143 [2024-07-15 11:41:24.643151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.143 [2024-07-15 11:41:24.643180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.143 qpair failed and we were unable to recover it. 00:29:56.143 [2024-07-15 11:41:24.643597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.143 [2024-07-15 11:41:24.643624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.143 qpair failed and we were unable to recover it. 00:29:56.143 [2024-07-15 11:41:24.644011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.143 [2024-07-15 11:41:24.644038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.143 qpair failed and we were unable to recover it. 00:29:56.143 [2024-07-15 11:41:24.644435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.143 [2024-07-15 11:41:24.644463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.143 qpair failed and we were unable to recover it. 00:29:56.143 [2024-07-15 11:41:24.644882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.143 [2024-07-15 11:41:24.644908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.143 qpair failed and we were unable to recover it. 00:29:56.143 [2024-07-15 11:41:24.645326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.143 [2024-07-15 11:41:24.645359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.143 qpair failed and we were unable to recover it. 00:29:56.143 [2024-07-15 11:41:24.645665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.143 [2024-07-15 11:41:24.645692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.143 qpair failed and we were unable to recover it. 00:29:56.143 [2024-07-15 11:41:24.646091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.143 [2024-07-15 11:41:24.646118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.143 qpair failed and we were unable to recover it. 00:29:56.143 [2024-07-15 11:41:24.646534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.143 [2024-07-15 11:41:24.646561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.143 qpair failed and we were unable to recover it. 00:29:56.143 [2024-07-15 11:41:24.646989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.143 [2024-07-15 11:41:24.647016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.143 qpair failed and we were unable to recover it. 
00:29:56.143 [2024-07-15 11:41:24.647441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.143 [2024-07-15 11:41:24.647469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.143 qpair failed and we were unable to recover it. 00:29:56.143 [2024-07-15 11:41:24.647875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.143 [2024-07-15 11:41:24.647901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.143 qpair failed and we were unable to recover it. 00:29:56.143 [2024-07-15 11:41:24.648310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.143 [2024-07-15 11:41:24.648339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.143 qpair failed and we were unable to recover it. 00:29:56.143 [2024-07-15 11:41:24.648759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.143 [2024-07-15 11:41:24.648785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.143 qpair failed and we were unable to recover it. 00:29:56.143 [2024-07-15 11:41:24.649199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.143 [2024-07-15 11:41:24.649227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.143 qpair failed and we were unable to recover it. 00:29:56.143 [2024-07-15 11:41:24.649532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.143 [2024-07-15 11:41:24.649564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.143 qpair failed and we were unable to recover it. 00:29:56.143 [2024-07-15 11:41:24.649960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.143 [2024-07-15 11:41:24.649987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.143 qpair failed and we were unable to recover it. 00:29:56.143 [2024-07-15 11:41:24.650432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.143 [2024-07-15 11:41:24.650459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.143 qpair failed and we were unable to recover it. 00:29:56.143 [2024-07-15 11:41:24.650840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.143 [2024-07-15 11:41:24.650867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.143 qpair failed and we were unable to recover it. 00:29:56.143 [2024-07-15 11:41:24.651281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.143 [2024-07-15 11:41:24.651309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.143 qpair failed and we were unable to recover it. 
00:29:56.143 [2024-07-15 11:41:24.651725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.143 [2024-07-15 11:41:24.651752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.143 qpair failed and we were unable to recover it. 00:29:56.143 [2024-07-15 11:41:24.652151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.143 [2024-07-15 11:41:24.652178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.143 qpair failed and we were unable to recover it. 00:29:56.143 [2024-07-15 11:41:24.652594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.143 [2024-07-15 11:41:24.652621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.143 qpair failed and we were unable to recover it. 00:29:56.143 [2024-07-15 11:41:24.652984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.653010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 00:29:56.144 [2024-07-15 11:41:24.653434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.653461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 00:29:56.144 [2024-07-15 11:41:24.653842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.653868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 00:29:56.144 [2024-07-15 11:41:24.654245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.654273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 00:29:56.144 [2024-07-15 11:41:24.654722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.654749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 00:29:56.144 [2024-07-15 11:41:24.655064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.655091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 00:29:56.144 [2024-07-15 11:41:24.655572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.655600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 
00:29:56.144 [2024-07-15 11:41:24.656013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.656040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 00:29:56.144 [2024-07-15 11:41:24.656525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.656553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 00:29:56.144 [2024-07-15 11:41:24.656964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.656992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 00:29:56.144 [2024-07-15 11:41:24.657432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.657460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 00:29:56.144 [2024-07-15 11:41:24.657757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.657787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 00:29:56.144 [2024-07-15 11:41:24.658202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.658230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 00:29:56.144 [2024-07-15 11:41:24.658648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.658674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 00:29:56.144 [2024-07-15 11:41:24.658986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.659012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 00:29:56.144 [2024-07-15 11:41:24.659410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.659438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 00:29:56.144 [2024-07-15 11:41:24.659845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.659872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 
00:29:56.144 [2024-07-15 11:41:24.660298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.660326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 00:29:56.144 [2024-07-15 11:41:24.660749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.660776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 00:29:56.144 [2024-07-15 11:41:24.661183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.661211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 00:29:56.144 [2024-07-15 11:41:24.661637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.661664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 00:29:56.144 [2024-07-15 11:41:24.662101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.662145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 00:29:56.144 [2024-07-15 11:41:24.662567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.662601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 00:29:56.144 [2024-07-15 11:41:24.662997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.663024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 00:29:56.144 [2024-07-15 11:41:24.663474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.663502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 00:29:56.144 [2024-07-15 11:41:24.663896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.663923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 00:29:56.144 [2024-07-15 11:41:24.664331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.664358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 
00:29:56.144 [2024-07-15 11:41:24.664764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.664791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 00:29:56.144 [2024-07-15 11:41:24.665225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.665253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 00:29:56.144 [2024-07-15 11:41:24.665682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.665709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 00:29:56.144 [2024-07-15 11:41:24.666032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.666058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 00:29:56.144 [2024-07-15 11:41:24.666372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.144 [2024-07-15 11:41:24.666403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.144 qpair failed and we were unable to recover it. 00:29:56.145 [2024-07-15 11:41:24.666828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.145 [2024-07-15 11:41:24.666855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.145 qpair failed and we were unable to recover it. 00:29:56.145 [2024-07-15 11:41:24.667299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.145 [2024-07-15 11:41:24.667327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.145 qpair failed and we were unable to recover it. 00:29:56.145 [2024-07-15 11:41:24.667735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.145 [2024-07-15 11:41:24.667761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.145 qpair failed and we were unable to recover it. 00:29:56.145 [2024-07-15 11:41:24.668184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.145 [2024-07-15 11:41:24.668211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.145 qpair failed and we were unable to recover it. 00:29:56.145 [2024-07-15 11:41:24.668628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.145 [2024-07-15 11:41:24.668655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.145 qpair failed and we were unable to recover it. 
00:29:56.145 [2024-07-15 11:41:24.669055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.145 [2024-07-15 11:41:24.669082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420
00:29:56.145 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 2024-07-15 11:41:24.669 through 11:41:24.759 ...]
00:29:56.152 [2024-07-15 11:41:24.760068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.152 [2024-07-15 11:41:24.760094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.152 qpair failed and we were unable to recover it. 00:29:56.152 [2024-07-15 11:41:24.760540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.152 [2024-07-15 11:41:24.760568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.152 qpair failed and we were unable to recover it. 00:29:56.152 [2024-07-15 11:41:24.760986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.152 [2024-07-15 11:41:24.761019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.152 qpair failed and we were unable to recover it. 00:29:56.152 [2024-07-15 11:41:24.761385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.152 [2024-07-15 11:41:24.761412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.152 qpair failed and we were unable to recover it. 00:29:56.152 [2024-07-15 11:41:24.761824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.152 [2024-07-15 11:41:24.761850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.152 qpair failed and we were unable to recover it. 00:29:56.152 [2024-07-15 11:41:24.762261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.152 [2024-07-15 11:41:24.762288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.152 qpair failed and we were unable to recover it. 00:29:56.152 [2024-07-15 11:41:24.762723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.152 [2024-07-15 11:41:24.762751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.152 qpair failed and we were unable to recover it. 00:29:56.152 [2024-07-15 11:41:24.763177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.152 [2024-07-15 11:41:24.763205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.152 qpair failed and we were unable to recover it. 00:29:56.152 [2024-07-15 11:41:24.763528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.152 [2024-07-15 11:41:24.763555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.152 qpair failed and we were unable to recover it. 00:29:56.152 [2024-07-15 11:41:24.763963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.152 [2024-07-15 11:41:24.763990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.152 qpair failed and we were unable to recover it. 
00:29:56.152 [2024-07-15 11:41:24.764409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.152 [2024-07-15 11:41:24.764436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.152 qpair failed and we were unable to recover it. 00:29:56.152 [2024-07-15 11:41:24.764844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.152 [2024-07-15 11:41:24.764870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.152 qpair failed and we were unable to recover it. 00:29:56.152 [2024-07-15 11:41:24.765269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.152 [2024-07-15 11:41:24.765296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.152 qpair failed and we were unable to recover it. 00:29:56.152 [2024-07-15 11:41:24.765723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.152 [2024-07-15 11:41:24.765749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.152 qpair failed and we were unable to recover it. 00:29:56.152 [2024-07-15 11:41:24.766145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.152 [2024-07-15 11:41:24.766173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.152 qpair failed and we were unable to recover it. 00:29:56.152 [2024-07-15 11:41:24.766387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.152 [2024-07-15 11:41:24.766419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.152 qpair failed and we were unable to recover it. 00:29:56.152 [2024-07-15 11:41:24.766850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.152 [2024-07-15 11:41:24.766877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.152 qpair failed and we were unable to recover it. 00:29:56.152 [2024-07-15 11:41:24.767284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.152 [2024-07-15 11:41:24.767312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.152 qpair failed and we were unable to recover it. 00:29:56.152 [2024-07-15 11:41:24.767745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.152 [2024-07-15 11:41:24.767771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.152 qpair failed and we were unable to recover it. 00:29:56.152 [2024-07-15 11:41:24.768240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.152 [2024-07-15 11:41:24.768267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.152 qpair failed and we were unable to recover it. 
00:29:56.152 [2024-07-15 11:41:24.768677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.152 [2024-07-15 11:41:24.768705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.152 qpair failed and we were unable to recover it. 00:29:56.152 [2024-07-15 11:41:24.768997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.152 [2024-07-15 11:41:24.769027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.152 qpair failed and we were unable to recover it. 00:29:56.152 [2024-07-15 11:41:24.769418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.153 [2024-07-15 11:41:24.769446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.153 qpair failed and we were unable to recover it. 00:29:56.153 [2024-07-15 11:41:24.769936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.153 [2024-07-15 11:41:24.769963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.153 qpair failed and we were unable to recover it. 00:29:56.153 [2024-07-15 11:41:24.770397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.153 [2024-07-15 11:41:24.770424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.153 qpair failed and we were unable to recover it. 00:29:56.153 [2024-07-15 11:41:24.770846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.153 [2024-07-15 11:41:24.770872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.153 qpair failed and we were unable to recover it. 00:29:56.153 [2024-07-15 11:41:24.771298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.153 [2024-07-15 11:41:24.771326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.153 qpair failed and we were unable to recover it. 00:29:56.153 [2024-07-15 11:41:24.771673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.153 [2024-07-15 11:41:24.771699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.153 qpair failed and we were unable to recover it. 00:29:56.153 [2024-07-15 11:41:24.772107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.153 [2024-07-15 11:41:24.772141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.153 qpair failed and we were unable to recover it. 00:29:56.153 [2024-07-15 11:41:24.772549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.153 [2024-07-15 11:41:24.772576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.153 qpair failed and we were unable to recover it. 
00:29:56.153 [2024-07-15 11:41:24.773016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.153 [2024-07-15 11:41:24.773043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.153 qpair failed and we were unable to recover it. 00:29:56.153 [2024-07-15 11:41:24.773449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.153 [2024-07-15 11:41:24.773477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.153 qpair failed and we were unable to recover it. 00:29:56.153 [2024-07-15 11:41:24.773894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.153 [2024-07-15 11:41:24.773921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.153 qpair failed and we were unable to recover it. 00:29:56.153 [2024-07-15 11:41:24.774360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.153 [2024-07-15 11:41:24.774388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.153 qpair failed and we were unable to recover it. 00:29:56.153 [2024-07-15 11:41:24.774820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.153 [2024-07-15 11:41:24.774846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.153 qpair failed and we were unable to recover it. 00:29:56.153 [2024-07-15 11:41:24.775258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.153 [2024-07-15 11:41:24.775286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.153 qpair failed and we were unable to recover it. 00:29:56.153 [2024-07-15 11:41:24.775710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.153 [2024-07-15 11:41:24.775737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.153 qpair failed and we were unable to recover it. 00:29:56.153 [2024-07-15 11:41:24.776166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.153 [2024-07-15 11:41:24.776193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.153 qpair failed and we were unable to recover it. 00:29:56.153 [2024-07-15 11:41:24.776602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.153 [2024-07-15 11:41:24.776629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.153 qpair failed and we were unable to recover it. 00:29:56.153 [2024-07-15 11:41:24.776914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.153 [2024-07-15 11:41:24.776941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.153 qpair failed and we were unable to recover it. 
00:29:56.153 [2024-07-15 11:41:24.777405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.153 [2024-07-15 11:41:24.777432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.153 qpair failed and we were unable to recover it. 00:29:56.153 [2024-07-15 11:41:24.777864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.153 [2024-07-15 11:41:24.777889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.153 qpair failed and we were unable to recover it. 00:29:56.153 [2024-07-15 11:41:24.778312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.153 [2024-07-15 11:41:24.778345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.153 qpair failed and we were unable to recover it. 00:29:56.153 [2024-07-15 11:41:24.778747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.153 [2024-07-15 11:41:24.778774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.153 qpair failed and we were unable to recover it. 00:29:56.153 [2024-07-15 11:41:24.779214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.153 [2024-07-15 11:41:24.779241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.153 qpair failed and we were unable to recover it. 00:29:56.153 [2024-07-15 11:41:24.779690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.153 [2024-07-15 11:41:24.779717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.153 qpair failed and we were unable to recover it. 00:29:56.153 [2024-07-15 11:41:24.780146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.153 [2024-07-15 11:41:24.780173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.153 qpair failed and we were unable to recover it. 00:29:56.153 [2024-07-15 11:41:24.780628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.153 [2024-07-15 11:41:24.780654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.153 qpair failed and we were unable to recover it. 00:29:56.153 [2024-07-15 11:41:24.781087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.153 [2024-07-15 11:41:24.781114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.153 qpair failed and we were unable to recover it. 00:29:56.153 [2024-07-15 11:41:24.781519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.153 [2024-07-15 11:41:24.781551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.153 qpair failed and we were unable to recover it. 
00:29:56.153 [2024-07-15 11:41:24.781951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.153 [2024-07-15 11:41:24.781979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.153 qpair failed and we were unable to recover it. 00:29:56.153 [2024-07-15 11:41:24.782387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.154 [2024-07-15 11:41:24.782415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.154 qpair failed and we were unable to recover it. 00:29:56.154 [2024-07-15 11:41:24.782824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.154 [2024-07-15 11:41:24.782850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.154 qpair failed and we were unable to recover it. 00:29:56.154 [2024-07-15 11:41:24.783252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.154 [2024-07-15 11:41:24.783279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.154 qpair failed and we were unable to recover it. 00:29:56.154 [2024-07-15 11:41:24.783590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.154 [2024-07-15 11:41:24.783616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.154 qpair failed and we were unable to recover it. 00:29:56.154 [2024-07-15 11:41:24.784063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.154 [2024-07-15 11:41:24.784089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.154 qpair failed and we were unable to recover it. 00:29:56.154 [2024-07-15 11:41:24.784501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.154 [2024-07-15 11:41:24.784529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.154 qpair failed and we were unable to recover it. 00:29:56.154 [2024-07-15 11:41:24.784929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.154 [2024-07-15 11:41:24.784956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.154 qpair failed and we were unable to recover it. 00:29:56.154 [2024-07-15 11:41:24.785377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.154 [2024-07-15 11:41:24.785404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.154 qpair failed and we were unable to recover it. 00:29:56.154 [2024-07-15 11:41:24.785816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.154 [2024-07-15 11:41:24.785842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.154 qpair failed and we were unable to recover it. 
00:29:56.154 [2024-07-15 11:41:24.786266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.154 [2024-07-15 11:41:24.786294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.154 qpair failed and we were unable to recover it. 00:29:56.154 [2024-07-15 11:41:24.786729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.154 [2024-07-15 11:41:24.786756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.154 qpair failed and we were unable to recover it. 00:29:56.154 [2024-07-15 11:41:24.787191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.154 [2024-07-15 11:41:24.787218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.154 qpair failed and we were unable to recover it. 00:29:56.154 [2024-07-15 11:41:24.787562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.154 [2024-07-15 11:41:24.787588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.154 qpair failed and we were unable to recover it. 00:29:56.154 [2024-07-15 11:41:24.787896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.154 [2024-07-15 11:41:24.787925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.154 qpair failed and we were unable to recover it. 00:29:56.154 [2024-07-15 11:41:24.788325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.154 [2024-07-15 11:41:24.788353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.154 qpair failed and we were unable to recover it. 00:29:56.154 [2024-07-15 11:41:24.788788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.154 [2024-07-15 11:41:24.788816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.154 qpair failed and we were unable to recover it. 00:29:56.154 [2024-07-15 11:41:24.789251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.154 [2024-07-15 11:41:24.789278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.154 qpair failed and we were unable to recover it. 00:29:56.154 [2024-07-15 11:41:24.789684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.154 [2024-07-15 11:41:24.789710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.154 qpair failed and we were unable to recover it. 00:29:56.154 [2024-07-15 11:41:24.790027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.154 [2024-07-15 11:41:24.790053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.154 qpair failed and we were unable to recover it. 
00:29:56.154 [2024-07-15 11:41:24.790404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.154 [2024-07-15 11:41:24.790432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.154 qpair failed and we were unable to recover it. 00:29:56.154 [2024-07-15 11:41:24.790865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.154 [2024-07-15 11:41:24.790892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.154 qpair failed and we were unable to recover it. 00:29:56.154 [2024-07-15 11:41:24.791219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.154 [2024-07-15 11:41:24.791246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.154 qpair failed and we were unable to recover it. 00:29:56.154 [2024-07-15 11:41:24.791662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.154 [2024-07-15 11:41:24.791688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.154 qpair failed and we were unable to recover it. 00:29:56.154 [2024-07-15 11:41:24.792188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.154 [2024-07-15 11:41:24.792216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.154 qpair failed and we were unable to recover it. 00:29:56.154 [2024-07-15 11:41:24.792588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.154 [2024-07-15 11:41:24.792614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.154 qpair failed and we were unable to recover it. 00:29:56.154 [2024-07-15 11:41:24.793033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.154 [2024-07-15 11:41:24.793060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.154 qpair failed and we were unable to recover it. 00:29:56.154 [2024-07-15 11:41:24.793537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.793565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 00:29:56.155 [2024-07-15 11:41:24.793973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.793999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 00:29:56.155 [2024-07-15 11:41:24.794411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.794440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 
00:29:56.155 [2024-07-15 11:41:24.794884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.794911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 00:29:56.155 [2024-07-15 11:41:24.795420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.795447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 00:29:56.155 [2024-07-15 11:41:24.795881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.795913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 00:29:56.155 [2024-07-15 11:41:24.796329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.796357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 00:29:56.155 [2024-07-15 11:41:24.796802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.796828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 00:29:56.155 [2024-07-15 11:41:24.797257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.797284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 00:29:56.155 [2024-07-15 11:41:24.797574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.797603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 00:29:56.155 [2024-07-15 11:41:24.798040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.798067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 00:29:56.155 [2024-07-15 11:41:24.798411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.798439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 00:29:56.155 [2024-07-15 11:41:24.798806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.798833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 
00:29:56.155 [2024-07-15 11:41:24.799148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.799179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 00:29:56.155 [2024-07-15 11:41:24.799581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.799608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 00:29:56.155 [2024-07-15 11:41:24.800018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.800044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 00:29:56.155 [2024-07-15 11:41:24.800513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.800541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 00:29:56.155 [2024-07-15 11:41:24.800970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.800997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 00:29:56.155 [2024-07-15 11:41:24.801409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.801436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 00:29:56.155 [2024-07-15 11:41:24.801760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.801790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 00:29:56.155 [2024-07-15 11:41:24.802209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.802237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 00:29:56.155 [2024-07-15 11:41:24.802683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.802710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 00:29:56.155 [2024-07-15 11:41:24.803178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.803206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 
00:29:56.155 [2024-07-15 11:41:24.803623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.803650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 00:29:56.155 [2024-07-15 11:41:24.804080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.804106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 00:29:56.155 [2024-07-15 11:41:24.804517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.804544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 00:29:56.155 [2024-07-15 11:41:24.804967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.804994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 00:29:56.155 [2024-07-15 11:41:24.805409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.805436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 00:29:56.155 [2024-07-15 11:41:24.805717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.805742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 00:29:56.155 [2024-07-15 11:41:24.806196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.155 [2024-07-15 11:41:24.806224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.155 qpair failed and we were unable to recover it. 00:29:56.155 [2024-07-15 11:41:24.806622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.806648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-07-15 11:41:24.807058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.807085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-07-15 11:41:24.807546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.807575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 
00:29:56.156 [2024-07-15 11:41:24.807974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.808001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-07-15 11:41:24.808423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.808451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-07-15 11:41:24.808849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.808875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-07-15 11:41:24.809274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.809302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-07-15 11:41:24.809730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.809756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-07-15 11:41:24.810183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.810211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-07-15 11:41:24.810624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.810651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-07-15 11:41:24.811062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.811088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-07-15 11:41:24.811497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.811524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-07-15 11:41:24.811823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.811852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 
00:29:56.156 [2024-07-15 11:41:24.812278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.812305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-07-15 11:41:24.812736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.812762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-07-15 11:41:24.813186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.813219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-07-15 11:41:24.813550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.813576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-07-15 11:41:24.814027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.814054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-07-15 11:41:24.814501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.814528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-07-15 11:41:24.814933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.814960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-07-15 11:41:24.815376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.815403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-07-15 11:41:24.815720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.815745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-07-15 11:41:24.816142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.816169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 
00:29:56.156 [2024-07-15 11:41:24.816508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.816534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-07-15 11:41:24.816969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.816997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-07-15 11:41:24.817487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.817515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-07-15 11:41:24.817809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.817838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-07-15 11:41:24.818292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.818321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-07-15 11:41:24.818753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.818779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-07-15 11:41:24.819204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.819233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.156 qpair failed and we were unable to recover it. 00:29:56.156 [2024-07-15 11:41:24.819667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.156 [2024-07-15 11:41:24.819694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.157 qpair failed and we were unable to recover it. 00:29:56.157 [2024-07-15 11:41:24.820103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.157 [2024-07-15 11:41:24.820138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.157 qpair failed and we were unable to recover it. 00:29:56.157 [2024-07-15 11:41:24.820426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.157 [2024-07-15 11:41:24.820455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.157 qpair failed and we were unable to recover it. 
00:29:56.157 [2024-07-15 11:41:24.820922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.157 [2024-07-15 11:41:24.820949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.157 qpair failed and we were unable to recover it. 00:29:56.157 [2024-07-15 11:41:24.821365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.157 [2024-07-15 11:41:24.821392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.157 qpair failed and we were unable to recover it. 00:29:56.157 [2024-07-15 11:41:24.821707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.157 [2024-07-15 11:41:24.821734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.157 qpair failed and we were unable to recover it. 00:29:56.157 [2024-07-15 11:41:24.822167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.157 [2024-07-15 11:41:24.822195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.157 qpair failed and we were unable to recover it. 00:29:56.157 [2024-07-15 11:41:24.822607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.157 [2024-07-15 11:41:24.822633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.157 qpair failed and we were unable to recover it. 00:29:56.157 [2024-07-15 11:41:24.823062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.157 [2024-07-15 11:41:24.823089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.157 qpair failed and we were unable to recover it. 00:29:56.157 [2024-07-15 11:41:24.823503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.157 [2024-07-15 11:41:24.823531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.157 qpair failed and we were unable to recover it. 00:29:56.157 [2024-07-15 11:41:24.823846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.157 [2024-07-15 11:41:24.823876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.157 qpair failed and we were unable to recover it. 00:29:56.157 [2024-07-15 11:41:24.824231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.157 [2024-07-15 11:41:24.824259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.157 qpair failed and we were unable to recover it. 00:29:56.157 [2024-07-15 11:41:24.824710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.157 [2024-07-15 11:41:24.824737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.157 qpair failed and we were unable to recover it. 
00:29:56.157 [2024-07-15 11:41:24.825144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.157 [2024-07-15 11:41:24.825172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.157 qpair failed and we were unable to recover it. 00:29:56.157 [2024-07-15 11:41:24.825604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.157 [2024-07-15 11:41:24.825631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.157 qpair failed and we were unable to recover it. 00:29:56.157 [2024-07-15 11:41:24.826053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.157 [2024-07-15 11:41:24.826079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.157 qpair failed and we were unable to recover it. 00:29:56.157 [2024-07-15 11:41:24.826493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.157 [2024-07-15 11:41:24.826522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.157 qpair failed and we were unable to recover it. 00:29:56.157 [2024-07-15 11:41:24.826949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.157 [2024-07-15 11:41:24.826976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.157 qpair failed and we were unable to recover it. 00:29:56.157 [2024-07-15 11:41:24.827399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.157 [2024-07-15 11:41:24.827427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.157 qpair failed and we were unable to recover it. 00:29:56.157 [2024-07-15 11:41:24.827869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.157 [2024-07-15 11:41:24.827895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.157 qpair failed and we were unable to recover it. 00:29:56.157 [2024-07-15 11:41:24.828298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.157 [2024-07-15 11:41:24.828326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.157 qpair failed and we were unable to recover it. 00:29:56.157 [2024-07-15 11:41:24.828810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.157 [2024-07-15 11:41:24.828837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.157 qpair failed and we were unable to recover it. 00:29:56.157 [2024-07-15 11:41:24.829283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.157 [2024-07-15 11:41:24.829310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.157 qpair failed and we were unable to recover it. 
00:29:56.157 [2024-07-15 11:41:24.829628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.157 [2024-07-15 11:41:24.829658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.157 qpair failed and we were unable to recover it. 00:29:56.157 [2024-07-15 11:41:24.830148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.157 [2024-07-15 11:41:24.830176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.157 qpair failed and we were unable to recover it. 00:29:56.431 [2024-07-15 11:41:24.830535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.431 [2024-07-15 11:41:24.830571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.431 qpair failed and we were unable to recover it. 00:29:56.431 [2024-07-15 11:41:24.830972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.431 [2024-07-15 11:41:24.830998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.431 qpair failed and we were unable to recover it. 00:29:56.431 [2024-07-15 11:41:24.831413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.431 [2024-07-15 11:41:24.831441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.431 qpair failed and we were unable to recover it. 00:29:56.431 [2024-07-15 11:41:24.831881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.431 [2024-07-15 11:41:24.831908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.431 qpair failed and we were unable to recover it. 00:29:56.431 [2024-07-15 11:41:24.832203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.431 [2024-07-15 11:41:24.832230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.431 qpair failed and we were unable to recover it. 00:29:56.431 [2024-07-15 11:41:24.832702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.431 [2024-07-15 11:41:24.832729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.431 qpair failed and we were unable to recover it. 00:29:56.431 [2024-07-15 11:41:24.833133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.431 [2024-07-15 11:41:24.833162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.431 qpair failed and we were unable to recover it. 00:29:56.431 [2024-07-15 11:41:24.833629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.431 [2024-07-15 11:41:24.833655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.431 qpair failed and we were unable to recover it. 
00:29:56.431 [2024-07-15 11:41:24.834070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.431 [2024-07-15 11:41:24.834096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.431 qpair failed and we were unable to recover it. 00:29:56.431 [2024-07-15 11:41:24.834543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.431 [2024-07-15 11:41:24.834571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.431 qpair failed and we were unable to recover it. 00:29:56.431 [2024-07-15 11:41:24.834890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.431 [2024-07-15 11:41:24.834916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.431 qpair failed and we were unable to recover it. 00:29:56.431 [2024-07-15 11:41:24.835355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.431 [2024-07-15 11:41:24.835383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.431 qpair failed and we were unable to recover it. 00:29:56.431 [2024-07-15 11:41:24.835813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.431 [2024-07-15 11:41:24.835840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.431 qpair failed and we were unable to recover it. 00:29:56.431 [2024-07-15 11:41:24.836262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.431 [2024-07-15 11:41:24.836290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.431 qpair failed and we were unable to recover it. 00:29:56.431 [2024-07-15 11:41:24.836700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.431 [2024-07-15 11:41:24.836727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.431 qpair failed and we were unable to recover it. 00:29:56.431 [2024-07-15 11:41:24.837153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.431 [2024-07-15 11:41:24.837180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.431 qpair failed and we were unable to recover it. 00:29:56.431 [2024-07-15 11:41:24.837611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.431 [2024-07-15 11:41:24.837638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.431 qpair failed and we were unable to recover it. 00:29:56.431 [2024-07-15 11:41:24.838072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.431 [2024-07-15 11:41:24.838099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.431 qpair failed and we were unable to recover it. 
00:29:56.431 [2024-07-15 11:41:24.838454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.431 [2024-07-15 11:41:24.838482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.431 qpair failed and we were unable to recover it. 00:29:56.431 [2024-07-15 11:41:24.838915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.431 [2024-07-15 11:41:24.838941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.431 qpair failed and we were unable to recover it. 00:29:56.431 [2024-07-15 11:41:24.839269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.431 [2024-07-15 11:41:24.839300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.431 qpair failed and we were unable to recover it. 00:29:56.431 [2024-07-15 11:41:24.839708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.431 [2024-07-15 11:41:24.839734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.431 qpair failed and we were unable to recover it. 00:29:56.431 [2024-07-15 11:41:24.840186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.431 [2024-07-15 11:41:24.840213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.431 qpair failed and we were unable to recover it. 00:29:56.431 [2024-07-15 11:41:24.840637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.431 [2024-07-15 11:41:24.840664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.431 qpair failed and we were unable to recover it. 00:29:56.431 [2024-07-15 11:41:24.841072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.431 [2024-07-15 11:41:24.841098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.431 qpair failed and we were unable to recover it. 00:29:56.431 [2024-07-15 11:41:24.841442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.841469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 00:29:56.432 [2024-07-15 11:41:24.841817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.841844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 00:29:56.432 [2024-07-15 11:41:24.842263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.842292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 
00:29:56.432 [2024-07-15 11:41:24.842682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.842708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 00:29:56.432 [2024-07-15 11:41:24.843193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.843222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 00:29:56.432 [2024-07-15 11:41:24.843656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.843683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 00:29:56.432 [2024-07-15 11:41:24.844117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.844151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 00:29:56.432 [2024-07-15 11:41:24.844560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.844587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 00:29:56.432 [2024-07-15 11:41:24.845020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.845046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 00:29:56.432 [2024-07-15 11:41:24.845368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.845395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 00:29:56.432 [2024-07-15 11:41:24.845818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.845845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 00:29:56.432 [2024-07-15 11:41:24.846154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.846182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 00:29:56.432 [2024-07-15 11:41:24.846619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.846646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 
00:29:56.432 [2024-07-15 11:41:24.847073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.847100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 00:29:56.432 [2024-07-15 11:41:24.847513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.847541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 00:29:56.432 [2024-07-15 11:41:24.847953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.847986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 00:29:56.432 [2024-07-15 11:41:24.848373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.848401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 00:29:56.432 [2024-07-15 11:41:24.848830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.848856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 00:29:56.432 [2024-07-15 11:41:24.849355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.849383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 00:29:56.432 [2024-07-15 11:41:24.849810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.849837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 00:29:56.432 [2024-07-15 11:41:24.850286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.850314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 00:29:56.432 [2024-07-15 11:41:24.850738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.850765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 00:29:56.432 [2024-07-15 11:41:24.851199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.851227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 
00:29:56.432 [2024-07-15 11:41:24.851629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.851656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 00:29:56.432 [2024-07-15 11:41:24.852032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.852058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 00:29:56.432 [2024-07-15 11:41:24.852479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.852507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 00:29:56.432 [2024-07-15 11:41:24.852943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.852969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 00:29:56.432 [2024-07-15 11:41:24.853385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.853413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 00:29:56.432 [2024-07-15 11:41:24.853856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.853883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 00:29:56.432 [2024-07-15 11:41:24.854329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.854356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 00:29:56.432 [2024-07-15 11:41:24.854788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.854815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 00:29:56.432 [2024-07-15 11:41:24.855147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.432 [2024-07-15 11:41:24.855178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fa8000b90 with addr=10.0.0.2, port=4420 00:29:56.432 qpair failed and we were unable to recover it. 
00:29:56.432 Read completed with error (sct=0, sc=8) 00:29:56.432 starting I/O failed 00:29:56.432 Read completed with error (sct=0, sc=8) 00:29:56.432 starting I/O failed 00:29:56.432 Read completed with error (sct=0, sc=8) 00:29:56.432 starting I/O failed 00:29:56.432 Read completed with error (sct=0, sc=8) 00:29:56.432 starting I/O failed 00:29:56.432 Read completed with error (sct=0, sc=8) 00:29:56.432 starting I/O failed 00:29:56.432 Read completed with error (sct=0, sc=8) 00:29:56.432 starting I/O failed 00:29:56.432 Read completed with error (sct=0, sc=8) 00:29:56.432 starting I/O failed 00:29:56.432 Read completed with error (sct=0, sc=8) 00:29:56.432 starting I/O failed 00:29:56.432 Read completed with error (sct=0, sc=8) 00:29:56.432 starting I/O failed 00:29:56.432 Read completed with error (sct=0, sc=8) 00:29:56.432 starting I/O failed 00:29:56.432 Read completed with error (sct=0, sc=8) 00:29:56.432 starting I/O failed 00:29:56.432 Read completed with error (sct=0, sc=8) 00:29:56.432 starting I/O failed 00:29:56.432 Read completed with error (sct=0, sc=8) 00:29:56.432 starting I/O failed 00:29:56.432 Read completed with error (sct=0, sc=8) 00:29:56.432 starting I/O failed 00:29:56.432 Write completed with error (sct=0, sc=8) 00:29:56.432 starting I/O failed 00:29:56.432 Write completed with error (sct=0, sc=8) 00:29:56.432 starting I/O failed 00:29:56.432 Read completed with error (sct=0, sc=8) 00:29:56.432 starting I/O failed 00:29:56.432 Read completed with error (sct=0, sc=8) 00:29:56.432 starting I/O failed 00:29:56.432 Write completed with error (sct=0, sc=8) 00:29:56.432 starting I/O failed 00:29:56.432 Write completed with error (sct=0, sc=8) 00:29:56.432 starting I/O failed 00:29:56.432 Write completed with error (sct=0, sc=8) 00:29:56.432 starting I/O failed 00:29:56.432 Read completed with error (sct=0, sc=8) 00:29:56.432 starting I/O failed 00:29:56.432 Read completed with error (sct=0, sc=8) 00:29:56.432 starting I/O failed 00:29:56.432 Read completed with error (sct=0, sc=8) 00:29:56.433 starting I/O failed 00:29:56.433 Write completed with error (sct=0, sc=8) 00:29:56.433 starting I/O failed 00:29:56.433 Write completed with error (sct=0, sc=8) 00:29:56.433 starting I/O failed 00:29:56.433 Read completed with error (sct=0, sc=8) 00:29:56.433 starting I/O failed 00:29:56.433 Read completed with error (sct=0, sc=8) 00:29:56.433 starting I/O failed 00:29:56.433 Write completed with error (sct=0, sc=8) 00:29:56.433 starting I/O failed 00:29:56.433 Write completed with error (sct=0, sc=8) 00:29:56.433 starting I/O failed 00:29:56.433 Write completed with error (sct=0, sc=8) 00:29:56.433 starting I/O failed 00:29:56.433 Read completed with error (sct=0, sc=8) 00:29:56.433 starting I/O failed 00:29:56.433 [2024-07-15 11:41:24.855474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:56.433 [2024-07-15 11:41:24.855923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.855940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 
00:29:56.433 [2024-07-15 11:41:24.856307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.856321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.856736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.856745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.857165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.857180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.857580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.857589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.858033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.858042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.858421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.858431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.858824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.858834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.859233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.859243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.859611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.859621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.860029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.860039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 
00:29:56.433 [2024-07-15 11:41:24.860415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.860426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.860822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.860832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.861065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.861078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.861476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.861487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.861771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.861780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.862189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.862198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.862603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.862614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.863027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.863037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.863432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.863442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.863835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.863844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 
00:29:56.433 [2024-07-15 11:41:24.864237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.864247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.864622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.864631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.865038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.865048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.865319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.865329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.865736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.865745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.866120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.866133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.866580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.866589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.867000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.867009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.867403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.867413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.867828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.867838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 
00:29:56.433 [2024-07-15 11:41:24.868289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.868299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.868662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.868672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.869070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.869081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.869480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.869489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.869859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.869868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.870262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.870272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.870659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.870668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.871042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.433 [2024-07-15 11:41:24.871052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.433 qpair failed and we were unable to recover it. 00:29:56.433 [2024-07-15 11:41:24.871326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.871337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.871735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.871744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 
00:29:56.434 [2024-07-15 11:41:24.872113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.872125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.872567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.872577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.872984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.872994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.873474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.873523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.873825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.873837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.874233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.874243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.874626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.874635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.875023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.875032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.875417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.875427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.875655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.875668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 
00:29:56.434 [2024-07-15 11:41:24.876075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.876085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.876460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.876469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.876844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.876854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.877267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.877276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.877694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.877703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.878111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.878120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.878530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.878540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.878967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.878976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.879495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.879539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.879954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.879966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 
00:29:56.434 [2024-07-15 11:41:24.880450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.880495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.880810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.880823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.881339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.881384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.881688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.881700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.882094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.882104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.882483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.882493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.882783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.882792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.883195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.883205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.883601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.883610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.883990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.884000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 
00:29:56.434 [2024-07-15 11:41:24.884407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.884421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.884830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.884839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.885206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.885216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.885660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.885670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.886068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.886077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.886452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.886463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.886871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.886881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.887297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.887306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.887716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.887726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 00:29:56.434 [2024-07-15 11:41:24.888126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.434 [2024-07-15 11:41:24.888137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.434 qpair failed and we were unable to recover it. 
00:29:56.435 [2024-07-15 11:41:24.888512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.435 [2024-07-15 11:41:24.888521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.435 qpair failed and we were unable to recover it. 00:29:56.435 [2024-07-15 11:41:24.888918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.435 [2024-07-15 11:41:24.888928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.435 qpair failed and we were unable to recover it. 00:29:56.435 [2024-07-15 11:41:24.889422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.435 [2024-07-15 11:41:24.889468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.435 qpair failed and we were unable to recover it. 00:29:56.435 [2024-07-15 11:41:24.889903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.435 [2024-07-15 11:41:24.889915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.435 qpair failed and we were unable to recover it. 00:29:56.435 [2024-07-15 11:41:24.890420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.435 [2024-07-15 11:41:24.890465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.435 qpair failed and we were unable to recover it. 00:29:56.435 [2024-07-15 11:41:24.890884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.435 [2024-07-15 11:41:24.890896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.435 qpair failed and we were unable to recover it. 00:29:56.435 [2024-07-15 11:41:24.891298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.435 [2024-07-15 11:41:24.891343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.435 qpair failed and we were unable to recover it. 00:29:56.435 [2024-07-15 11:41:24.891750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.435 [2024-07-15 11:41:24.891762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.435 qpair failed and we were unable to recover it. 00:29:56.435 [2024-07-15 11:41:24.892202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.435 [2024-07-15 11:41:24.892213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.435 qpair failed and we were unable to recover it. 00:29:56.435 [2024-07-15 11:41:24.892624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.435 [2024-07-15 11:41:24.892633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.435 qpair failed and we were unable to recover it. 
00:29:56.435 [2024-07-15 11:41:24.893032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.435 [2024-07-15 11:41:24.893041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.435 qpair failed and we were unable to recover it. 00:29:56.435 [2024-07-15 11:41:24.893435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.435 [2024-07-15 11:41:24.893445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.435 qpair failed and we were unable to recover it. 00:29:56.435 [2024-07-15 11:41:24.893805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.435 [2024-07-15 11:41:24.893814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.435 qpair failed and we were unable to recover it. 00:29:56.435 [2024-07-15 11:41:24.894188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.435 [2024-07-15 11:41:24.894198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.435 qpair failed and we were unable to recover it. 00:29:56.435 [2024-07-15 11:41:24.894575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.435 [2024-07-15 11:41:24.894584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.435 qpair failed and we were unable to recover it. 00:29:56.435 [2024-07-15 11:41:24.894960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.435 [2024-07-15 11:41:24.894970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.435 qpair failed and we were unable to recover it. 00:29:56.435 [2024-07-15 11:41:24.895261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.435 [2024-07-15 11:41:24.895271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.435 qpair failed and we were unable to recover it. 00:29:56.435 [2024-07-15 11:41:24.895668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.435 [2024-07-15 11:41:24.895677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.435 qpair failed and we were unable to recover it. 00:29:56.435 [2024-07-15 11:41:24.896170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.435 [2024-07-15 11:41:24.896180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.435 qpair failed and we were unable to recover it. 00:29:56.435 [2024-07-15 11:41:24.896564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.435 [2024-07-15 11:41:24.896574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.435 qpair failed and we were unable to recover it. 
00:29:56.435 [2024-07-15 11:41:24.896984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.435 [2024-07-15 11:41:24.896994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.435 qpair failed and we were unable to recover it. 00:29:56.435 [2024-07-15 11:41:24.897401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.435 [2024-07-15 11:41:24.897410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.435 qpair failed and we were unable to recover it. 00:29:56.435 [2024-07-15 11:41:24.897763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.435 [2024-07-15 11:41:24.897772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.435 qpair failed and we were unable to recover it. 00:29:56.435 [2024-07-15 11:41:24.898164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.435 [2024-07-15 11:41:24.898174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.435 qpair failed and we were unable to recover it. 00:29:56.435 [2024-07-15 11:41:24.898555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.435 [2024-07-15 11:41:24.898564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.435 qpair failed and we were unable to recover it. 00:29:56.435 [2024-07-15 11:41:24.898945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.435 [2024-07-15 11:41:24.898954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.435 qpair failed and we were unable to recover it. 00:29:56.435 [2024-07-15 11:41:24.899354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.435 [2024-07-15 11:41:24.899364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.435 qpair failed and we were unable to recover it. 00:29:56.435 [2024-07-15 11:41:24.899756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.435 [2024-07-15 11:41:24.899765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.435 qpair failed and we were unable to recover it. 00:29:56.435 [2024-07-15 11:41:24.900132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.435 [2024-07-15 11:41:24.900142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.435 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.900528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.900537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 
00:29:56.436 [2024-07-15 11:41:24.900911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.900920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.901285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.901297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.901720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.901729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.902100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.902109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.902479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.902488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.902881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.902891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.903273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.903283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.903647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.903657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.904050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.904059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.904433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.904443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 
00:29:56.436 [2024-07-15 11:41:24.904809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.904818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.905228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.905237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.905637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.905646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.906011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.906020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.906418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.906428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.906799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.906809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.907224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.907234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.907604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.907613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.908016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.908026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.908415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.908425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 
00:29:56.436 [2024-07-15 11:41:24.908826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.908836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.909235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.909245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.909662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.909671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.910057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.910067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.910456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.910466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.910749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.910758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.911142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.911152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.911530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.911540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.911953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.911965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.912331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.912341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 
00:29:56.436 [2024-07-15 11:41:24.912729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.912738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.913145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.913154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.913561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.913571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.913962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.913972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.914361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.914370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.914741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.914750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.915036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.915046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.915425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.915435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.915842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.915852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.916259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.916269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 
00:29:56.436 [2024-07-15 11:41:24.916649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.916659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.917054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.917063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.917479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.917490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.917891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.436 [2024-07-15 11:41:24.917900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.436 qpair failed and we were unable to recover it. 00:29:56.436 [2024-07-15 11:41:24.918266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.918275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.918695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.918705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.918893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.918909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.919327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.919337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.919719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.919728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.920095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.920104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 
00:29:56.437 [2024-07-15 11:41:24.920491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.920501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.920908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.920918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.921404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.921448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.921866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.921878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.922136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.922148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.922568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.922578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.922982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.922992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.923487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.923530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.924021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.924033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.924521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.924564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 
00:29:56.437 [2024-07-15 11:41:24.924981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.924993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.925502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.925546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.925960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.925974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.926462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.926504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.926923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.926935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.927412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.927456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.927858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.927870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.928326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.928370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.928699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.928711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.929116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.929138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 
00:29:56.437 [2024-07-15 11:41:24.929598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.929608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.930018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.930028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.930517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.930528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.930947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.930958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.931462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.931504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.931918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.931935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.932425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.932467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.932891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.932904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.933434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.933476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.933881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.933894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 
00:29:56.437 [2024-07-15 11:41:24.934385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.934428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.934833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.934846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.935239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.935250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.935650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.935661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.936051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.936061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.936432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.936442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.936728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.936739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.937146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.937157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.437 [2024-07-15 11:41:24.937527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.437 [2024-07-15 11:41:24.937536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.437 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.937961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.937970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 
00:29:56.438 [2024-07-15 11:41:24.938377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.938388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.938756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.938765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.939167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.939177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.939548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.939558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.939879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.939889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.940278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.940288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.940702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.940715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.941107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.941118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.941496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.941505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.941883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.941892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 
00:29:56.438 [2024-07-15 11:41:24.942277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.942287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.942685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.942694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.942958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.942969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.943253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.943264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.943550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.943559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.943965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.943974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.944368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.944378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.944784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.944793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.945007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.945016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.945427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.945437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 
00:29:56.438 [2024-07-15 11:41:24.945827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.945836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.946127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.946138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.946501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.946510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.946914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.946923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.947450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.947491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.947979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.947991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.948482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.948524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.948890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.948901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.949405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.949447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.949859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.949871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 
00:29:56.438 [2024-07-15 11:41:24.950396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.950438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.950924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.950936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.951433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.951474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.951877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.951890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.952394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.952436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.952831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.952843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.953119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.953137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.953553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.953562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.953934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.953943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 00:29:56.438 [2024-07-15 11:41:24.954160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.438 [2024-07-15 11:41:24.954174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.438 qpair failed and we were unable to recover it. 
00:29:56.438 [2024-07-15 11:41:24.954526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.438 [2024-07-15 11:41:24.954536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:56.438 qpair failed and we were unable to recover it.
00:29:56.438 [2024-07-15 11:41:24.954912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.438 [2024-07-15 11:41:24.954921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:56.438 qpair failed and we were unable to recover it.
[The same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats without variation for every subsequent connection attempt from 2024-07-15 11:41:24.955 through 11:41:25.039, Jenkins timestamps 00:29:56.438 to 00:29:56.445.]
00:29:56.445 [2024-07-15 11:41:25.039654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.445 [2024-07-15 11:41:25.039663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.445 qpair failed and we were unable to recover it. 00:29:56.445 [2024-07-15 11:41:25.040061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.445 [2024-07-15 11:41:25.040073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.445 qpair failed and we were unable to recover it. 00:29:56.445 [2024-07-15 11:41:25.040528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.445 [2024-07-15 11:41:25.040537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.445 qpair failed and we were unable to recover it. 00:29:56.445 [2024-07-15 11:41:25.040907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.445 [2024-07-15 11:41:25.040916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.445 qpair failed and we were unable to recover it. 00:29:56.445 [2024-07-15 11:41:25.041310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.445 [2024-07-15 11:41:25.041320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.445 qpair failed and we were unable to recover it. 00:29:56.445 [2024-07-15 11:41:25.041709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.445 [2024-07-15 11:41:25.041718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.445 qpair failed and we were unable to recover it. 00:29:56.445 [2024-07-15 11:41:25.042130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.445 [2024-07-15 11:41:25.042140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.445 qpair failed and we were unable to recover it. 00:29:56.445 [2024-07-15 11:41:25.042528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.445 [2024-07-15 11:41:25.042537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.445 qpair failed and we were unable to recover it. 00:29:56.445 [2024-07-15 11:41:25.042934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.445 [2024-07-15 11:41:25.042944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.445 qpair failed and we were unable to recover it. 00:29:56.445 [2024-07-15 11:41:25.043437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.445 [2024-07-15 11:41:25.043476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.445 qpair failed and we were unable to recover it. 
00:29:56.445 [2024-07-15 11:41:25.043803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.445 [2024-07-15 11:41:25.043815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.445 qpair failed and we were unable to recover it. 00:29:56.445 [2024-07-15 11:41:25.044023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.445 [2024-07-15 11:41:25.044036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.445 qpair failed and we were unable to recover it. 00:29:56.445 [2024-07-15 11:41:25.044449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.445 [2024-07-15 11:41:25.044459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.445 qpair failed and we were unable to recover it. 00:29:56.445 [2024-07-15 11:41:25.044823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.445 [2024-07-15 11:41:25.044832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.445 qpair failed and we were unable to recover it. 00:29:56.445 [2024-07-15 11:41:25.045202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.445 [2024-07-15 11:41:25.045211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.445 qpair failed and we were unable to recover it. 00:29:56.445 [2024-07-15 11:41:25.045606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.445 [2024-07-15 11:41:25.045616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.445 qpair failed and we were unable to recover it. 00:29:56.445 [2024-07-15 11:41:25.045865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.445 [2024-07-15 11:41:25.045875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.445 qpair failed and we were unable to recover it. 00:29:56.445 [2024-07-15 11:41:25.046261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.445 [2024-07-15 11:41:25.046277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.445 qpair failed and we were unable to recover it. 00:29:56.445 [2024-07-15 11:41:25.046671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.445 [2024-07-15 11:41:25.046680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.445 qpair failed and we were unable to recover it. 00:29:56.445 [2024-07-15 11:41:25.047091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.445 [2024-07-15 11:41:25.047100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.445 qpair failed and we were unable to recover it. 
00:29:56.445 [2024-07-15 11:41:25.047466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.445 [2024-07-15 11:41:25.047476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.445 qpair failed and we were unable to recover it. 00:29:56.445 [2024-07-15 11:41:25.047896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.445 [2024-07-15 11:41:25.047906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.445 qpair failed and we were unable to recover it. 00:29:56.445 [2024-07-15 11:41:25.048211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.445 [2024-07-15 11:41:25.048221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.445 qpair failed and we were unable to recover it. 00:29:56.445 [2024-07-15 11:41:25.048585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.445 [2024-07-15 11:41:25.048594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.445 qpair failed and we were unable to recover it. 00:29:56.445 [2024-07-15 11:41:25.048882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.445 [2024-07-15 11:41:25.048892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.445 qpair failed and we were unable to recover it. 00:29:56.445 [2024-07-15 11:41:25.049282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.445 [2024-07-15 11:41:25.049291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.445 qpair failed and we were unable to recover it. 00:29:56.445 [2024-07-15 11:41:25.049727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.445 [2024-07-15 11:41:25.049736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.445 qpair failed and we were unable to recover it. 00:29:56.445 [2024-07-15 11:41:25.050107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.050116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.050540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.050550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.050943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.050953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 
00:29:56.446 [2024-07-15 11:41:25.051439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.051478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.051882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.051894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.052382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.052421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.052842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.052853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.053331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.053369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.053779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.053791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.054174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.054184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.054551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.054561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.054931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.054941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.055279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.055289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 
00:29:56.446 [2024-07-15 11:41:25.055684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.055694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.056081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.056090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.056471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.056486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.056890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.056899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.057263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.057272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.057689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.057698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.058083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.058093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.058578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.058588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.058972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.058981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.059477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.059515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 
00:29:56.446 [2024-07-15 11:41:25.059933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.059944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.060443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.060482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.060900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.060912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.061420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.061460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.061875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.061887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.062395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.062435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.062824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.062836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.063223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.063236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.063628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.063638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.064011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.064021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 
00:29:56.446 [2024-07-15 11:41:25.064489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.064499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.064860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.064869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.065286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.065295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.065701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.065710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.065995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.066005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.066458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.066468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.066832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.066841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.446 qpair failed and we were unable to recover it. 00:29:56.446 [2024-07-15 11:41:25.067239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.446 [2024-07-15 11:41:25.067249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.067596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.067606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.067860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.067874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 
00:29:56.447 [2024-07-15 11:41:25.068239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.068249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.068667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.068676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.069073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.069083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.069476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.069486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.069886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.069895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.070306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.070316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.070719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.070728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.071130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.071139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.071506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.071515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.071874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.071883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 
00:29:56.447 [2024-07-15 11:41:25.072359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.072397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.072815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.072827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.073214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.073224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.073629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.073640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.074014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.074023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.074428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.074438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.074708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.074717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.075105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.075114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.075474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.075484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.075766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.075775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 
00:29:56.447 [2024-07-15 11:41:25.076112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.076121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.076487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.076496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.076898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.076907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.077276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.077286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.077699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.077708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.078138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.078149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.078527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.078537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.078924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.078934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.079423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.079462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.079872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.079884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 
00:29:56.447 [2024-07-15 11:41:25.080364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.080403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.080825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.080836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.081200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.081210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.081668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.081678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.082048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.082057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.082442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.082452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.082821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.082831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.083084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.083096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.083483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.083493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.083786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.083796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 
00:29:56.447 [2024-07-15 11:41:25.084185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.084198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.084575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.084584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.084949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.084959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.085323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.085333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.085768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.085778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.086184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.086194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.086583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.086592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.447 qpair failed and we were unable to recover it. 00:29:56.447 [2024-07-15 11:41:25.086986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.447 [2024-07-15 11:41:25.086996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.087380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.087390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.087756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.087765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 
00:29:56.448 [2024-07-15 11:41:25.088167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.088176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.088640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.088649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.089010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.089019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.089419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.089428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.089776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.089785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.089992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.090004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.090380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.090389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.090739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.090748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.091121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.091134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.091500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.091510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 
00:29:56.448 [2024-07-15 11:41:25.091890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.091899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.092308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.092317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.092685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.092694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.093097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.093106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.093393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.093404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.093837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.093846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.094254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.094264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.094668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.094677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.094880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.094891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.095270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.095279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 
00:29:56.448 [2024-07-15 11:41:25.095643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.095653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.096040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.096049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.096395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.096406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.096811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.096820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.097020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.097030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.097296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.097306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.097690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.097699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.098105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.098114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.098491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.098501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.098903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.098912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 
00:29:56.448 [2024-07-15 11:41:25.099300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.099309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.099722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.099732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.100143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.100153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.100546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.100555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.100961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.100970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.101341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.101350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.101758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.101768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.102156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.102166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.102556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.102565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.102817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.102829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 
00:29:56.448 [2024-07-15 11:41:25.103227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.103239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.103606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.103615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.104001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.104010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.104389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.104398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.104785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.104794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.105104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.105114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.448 [2024-07-15 11:41:25.105505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.448 [2024-07-15 11:41:25.105514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.448 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.105921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.105930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.106305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.106315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.106701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.106711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 
00:29:56.449 [2024-07-15 11:41:25.107005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.107014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.107328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.107337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.107726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.107735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.108206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.108215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.108600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.108609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.109013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.109022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.109433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.109443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.109811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.109821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.110154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.110167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.110551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.110560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 
00:29:56.449 [2024-07-15 11:41:25.110943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.110952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.111319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.111328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.111706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.111715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.112149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.112158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.112544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.112553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.112942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.112951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.113245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.113261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.113666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.113676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.114079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.114088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.114461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.114470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 
00:29:56.449 [2024-07-15 11:41:25.114678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.114690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.115072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.115081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.115463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.115473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.115880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.115889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.116251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.116261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.116674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.116684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.116898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.116908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.117308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.117318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.117704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.117713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.118113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.118128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 
00:29:56.449 [2024-07-15 11:41:25.118487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.118496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.118802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.118810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.119207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.119224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.119594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.119603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.119972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.119980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.120268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.120278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.449 [2024-07-15 11:41:25.120690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.449 [2024-07-15 11:41:25.120699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.449 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.121073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.121083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.121468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.121478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.121922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.121932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 
00:29:56.724 [2024-07-15 11:41:25.122461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.122499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.122909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.122921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.123409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.123446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.123856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.123868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.124229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.124239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.124656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.124666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.125051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.125061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.125487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.125497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.125896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.125906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.126417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.126455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 
00:29:56.724 [2024-07-15 11:41:25.126871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.126883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.127285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.127296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.127685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.127694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.128107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.128116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.128546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.128556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.128952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.128963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.129305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.129344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.129738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.129750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.130133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.130144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.130522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.130531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 
00:29:56.724 [2024-07-15 11:41:25.130893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.130902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.131406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.131443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.131854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.131866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.132351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.132388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.132805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.132817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.133183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.133193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.133551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.133560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.133948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.133957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.134359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.134369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.134783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.134792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 
00:29:56.724 [2024-07-15 11:41:25.135182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.135192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.724 qpair failed and we were unable to recover it. 00:29:56.724 [2024-07-15 11:41:25.135547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.724 [2024-07-15 11:41:25.135563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.135947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.135956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.136320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.136329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.136713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.136723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.137107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.137116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.137547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.137561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.137970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.137979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.138478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.138515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.138939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.138950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 
00:29:56.725 [2024-07-15 11:41:25.139450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.139488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.139887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.139899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.140410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.140447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.140867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.140879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.141403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.141440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.141929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.141941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.142419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.142457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.142876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.142887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.143289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.143327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.143742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.143754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 
00:29:56.725 [2024-07-15 11:41:25.144157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.144168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.144564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.144573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.145026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.145036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.145421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.145430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.145833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.145842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.146258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.146268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.146643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.146652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.147053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.147063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.147499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.147509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.147869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.147878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 
00:29:56.725 [2024-07-15 11:41:25.148250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.148260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.148558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.148567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.148950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.148959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.149359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.149369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.149724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.149733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.150139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.150149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.150540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.150549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.150935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.150944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.151303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.151312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.151699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.151708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 
00:29:56.725 [2024-07-15 11:41:25.152093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.152102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.152500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.725 [2024-07-15 11:41:25.152509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.725 qpair failed and we were unable to recover it. 00:29:56.725 [2024-07-15 11:41:25.152911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.726 [2024-07-15 11:41:25.152920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.726 qpair failed and we were unable to recover it. 00:29:56.726 [2024-07-15 11:41:25.153387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.726 [2024-07-15 11:41:25.153424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.726 qpair failed and we were unable to recover it. 00:29:56.726 [2024-07-15 11:41:25.153848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.726 [2024-07-15 11:41:25.153859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.726 qpair failed and we were unable to recover it. 00:29:56.726 [2024-07-15 11:41:25.154221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.726 [2024-07-15 11:41:25.154231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.726 qpair failed and we were unable to recover it. 00:29:56.726 [2024-07-15 11:41:25.154620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.726 [2024-07-15 11:41:25.154629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.726 qpair failed and we were unable to recover it. 00:29:56.726 [2024-07-15 11:41:25.155036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.726 [2024-07-15 11:41:25.155050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.726 qpair failed and we were unable to recover it. 00:29:56.726 [2024-07-15 11:41:25.155430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.726 [2024-07-15 11:41:25.155440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.726 qpair failed and we were unable to recover it. 00:29:56.726 [2024-07-15 11:41:25.155826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.726 [2024-07-15 11:41:25.155835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.726 qpair failed and we were unable to recover it. 
00:29:56.726 [2024-07-15 11:41:25.156232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.726 [2024-07-15 11:41:25.156242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.726 qpair failed and we were unable to recover it. 00:29:56.726 [2024-07-15 11:41:25.156610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.726 [2024-07-15 11:41:25.156618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.726 qpair failed and we were unable to recover it. 00:29:56.726 [2024-07-15 11:41:25.157022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.726 [2024-07-15 11:41:25.157031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.726 qpair failed and we were unable to recover it. 00:29:56.726 [2024-07-15 11:41:25.157438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.726 [2024-07-15 11:41:25.157447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.726 qpair failed and we were unable to recover it. 00:29:56.726 [2024-07-15 11:41:25.157806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.726 [2024-07-15 11:41:25.157815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.726 qpair failed and we were unable to recover it. 00:29:56.726 [2024-07-15 11:41:25.158222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.726 [2024-07-15 11:41:25.158231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.726 qpair failed and we were unable to recover it. 00:29:56.726 [2024-07-15 11:41:25.158713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.726 [2024-07-15 11:41:25.158722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.726 qpair failed and we were unable to recover it. 00:29:56.726 [2024-07-15 11:41:25.159119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.726 [2024-07-15 11:41:25.159133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.726 qpair failed and we were unable to recover it. 00:29:56.726 [2024-07-15 11:41:25.159504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.726 [2024-07-15 11:41:25.159513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.726 qpair failed and we were unable to recover it. 00:29:56.726 [2024-07-15 11:41:25.159915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.726 [2024-07-15 11:41:25.159924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.726 qpair failed and we were unable to recover it. 
00:29:56.726 [2024-07-15 11:41:25.160393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.726 [2024-07-15 11:41:25.160430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:56.726 qpair failed and we were unable to recover it.
00:29:56.726 [... the same three-line failure (connect() failed, errno = 111 -> sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it) repeats for every reconnect attempt timestamped from 11:41:25.160817 through 11:41:25.243989 ...]
00:29:56.731 [2024-07-15 11:41:25.244368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.731 [2024-07-15 11:41:25.244377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:56.731 qpair failed and we were unable to recover it.
00:29:56.731 [2024-07-15 11:41:25.244739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.731 [2024-07-15 11:41:25.244748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.731 qpair failed and we were unable to recover it. 00:29:56.731 [2024-07-15 11:41:25.245160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.731 [2024-07-15 11:41:25.245169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.731 qpair failed and we were unable to recover it. 00:29:56.731 [2024-07-15 11:41:25.245527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.731 [2024-07-15 11:41:25.245536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.731 qpair failed and we were unable to recover it. 00:29:56.731 [2024-07-15 11:41:25.245920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.731 [2024-07-15 11:41:25.245929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.731 qpair failed and we were unable to recover it. 00:29:56.731 [2024-07-15 11:41:25.246287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.731 [2024-07-15 11:41:25.246297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.731 qpair failed and we were unable to recover it. 00:29:56.731 [2024-07-15 11:41:25.246716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.731 [2024-07-15 11:41:25.246725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.731 qpair failed and we were unable to recover it. 00:29:56.731 [2024-07-15 11:41:25.247113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.731 [2024-07-15 11:41:25.247126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.731 qpair failed and we were unable to recover it. 00:29:56.731 [2024-07-15 11:41:25.247519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.731 [2024-07-15 11:41:25.247529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.731 qpair failed and we were unable to recover it. 00:29:56.731 [2024-07-15 11:41:25.247916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.731 [2024-07-15 11:41:25.247925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.731 qpair failed and we were unable to recover it. 00:29:56.731 [2024-07-15 11:41:25.248439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.731 [2024-07-15 11:41:25.248476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 
00:29:56.732 [2024-07-15 11:41:25.248895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.248907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.249396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.249432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.249792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.249803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.250214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.250225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.250624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.250633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.250962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.250971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.251255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.251264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.251681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.251690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.252059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.252068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.252439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.252449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 
00:29:56.732 [2024-07-15 11:41:25.252820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.252830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.253275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.253285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.253564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.253574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.253981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.253990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.254351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.254360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.254754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.254763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.255129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.255139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.255429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.255439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.255834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.255843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.256271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.256281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 
00:29:56.732 [2024-07-15 11:41:25.256680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.256689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.257099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.257108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.257479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.257489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.257890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.257899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.258385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.258422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.258771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.258782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.259188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.259199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.259584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.259595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.259999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.260008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.260375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.260385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 
00:29:56.732 [2024-07-15 11:41:25.260769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.260779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.261144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.261154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.261415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.732 [2024-07-15 11:41:25.261424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.732 qpair failed and we were unable to recover it. 00:29:56.732 [2024-07-15 11:41:25.261703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.261713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.262096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.262105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.262507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.262516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.262917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.262925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.263329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.263339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.263763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.263772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.264176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.264188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 
00:29:56.733 [2024-07-15 11:41:25.264395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.264407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.264813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.264823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.265278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.265288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.265658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.265668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.266051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.266061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.266450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.266460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.266825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.266834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.267234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.267244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.267632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.267641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.267941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.267950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 
00:29:56.733 [2024-07-15 11:41:25.268298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.268308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.268678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.268688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.269070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.269080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.269333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.269344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.269741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.269750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.270115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.270129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.270519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.270528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.270913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.270922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.271320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.271332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.271696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.271705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 
00:29:56.733 [2024-07-15 11:41:25.271991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.272000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.272396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.272406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.272769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.272778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.273290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.273326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.273754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.273766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.274162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.274172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.274556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.274566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.274931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.274940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.275301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.275311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.275707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.275717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 
00:29:56.733 [2024-07-15 11:41:25.275981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.275990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.276400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.276410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.276812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.276822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.277139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.277149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.277542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.277552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.277916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.277925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.278402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.278439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.733 qpair failed and we were unable to recover it. 00:29:56.733 [2024-07-15 11:41:25.278860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.733 [2024-07-15 11:41:25.278872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.279352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.279389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.279799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.279811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 
00:29:56.734 [2024-07-15 11:41:25.280341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.280385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.280798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.280809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.281120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.281136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.281527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.281536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.281792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.281803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.282249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.282259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.282643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.282653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.283057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.283067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.283444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.283453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.283830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.283839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 
00:29:56.734 [2024-07-15 11:41:25.284222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.284231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.284597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.284607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.284894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.284904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.285291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.285301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.285688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.285697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.286079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.286088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.286473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.286482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.286778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.286787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.287171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.287181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.287554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.287563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 
00:29:56.734 [2024-07-15 11:41:25.287953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.287962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.288355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.288365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.288753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.288762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.289135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.289145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.289538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.289548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.289938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.289948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.290431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.290468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.290879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.290895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.291409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.291446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.291864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.291876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 
00:29:56.734 [2024-07-15 11:41:25.292394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.292431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.292918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.292930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.293443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.293479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.293837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.293849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.294389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.294425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.294831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.294843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.295237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.295247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.295746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.295755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.734 qpair failed and we were unable to recover it. 00:29:56.734 [2024-07-15 11:41:25.296146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.734 [2024-07-15 11:41:25.296155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.296487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.296497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 
00:29:56.735 [2024-07-15 11:41:25.296886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.296895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.297303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.297314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.297723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.297733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.298141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.298151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.298484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.298493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.298895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.298904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.299279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.299288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.299653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.299663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.300039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.300048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.300417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.300426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 
00:29:56.735 [2024-07-15 11:41:25.300811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.300820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.301227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.301236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.301604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.301613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.301942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.301951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.302349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.302358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.302723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.302733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.303139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.303149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.303584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.303593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.303972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.303981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.304387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.304397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 
00:29:56.735 [2024-07-15 11:41:25.304805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.304815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.305134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.305144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.305553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.305562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.305965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.305974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.306503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.306540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.306940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.306953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.307322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.307359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.307769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.307780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.308055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.308070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.308464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.308474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 
00:29:56.735 [2024-07-15 11:41:25.308783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.308793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.309188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.309198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.309563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.309573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.309961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.309970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.310372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.310382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.310810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.310819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.311179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.311189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.311539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.311549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.311933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.311942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 00:29:56.735 [2024-07-15 11:41:25.312310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.735 [2024-07-15 11:41:25.312320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.735 qpair failed and we were unable to recover it. 
00:29:56.736 [2024-07-15 11:41:25.312696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.312706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.313099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.313109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.313534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.313544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.313916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.313926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.314401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.314438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.314858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.314869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.315132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.315143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.315526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.315535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.315936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.315946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.316420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.316457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 
00:29:56.736 [2024-07-15 11:41:25.316868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.316880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.317406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.317442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.317691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.317704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.318098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.318108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.318476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.318486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.318857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.318871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.319338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.319375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.319803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.319815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.320185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.320195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.320395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.320408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 
00:29:56.736 [2024-07-15 11:41:25.320743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.320752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.321215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.321225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.321597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.321606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.321974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.321983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.322371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.322380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.322764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.322773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.323163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.323172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.323555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.323564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.323871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.323880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.324268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.324278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 
00:29:56.736 [2024-07-15 11:41:25.324680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.324689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.325051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.325060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.325438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.325448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.325733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.325742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.326119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.326133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.326498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.326507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.326896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.326906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.327326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.327335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.327695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.736 [2024-07-15 11:41:25.327704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.736 qpair failed and we were unable to recover it. 00:29:56.736 [2024-07-15 11:41:25.328086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.328095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 
00:29:56.737 [2024-07-15 11:41:25.328443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.328452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 00:29:56.737 [2024-07-15 11:41:25.328813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.328823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 00:29:56.737 [2024-07-15 11:41:25.329182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.329192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 00:29:56.737 [2024-07-15 11:41:25.329576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.329585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 00:29:56.737 [2024-07-15 11:41:25.329964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.329973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 00:29:56.737 [2024-07-15 11:41:25.330376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.330385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 00:29:56.737 [2024-07-15 11:41:25.330857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.330865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 00:29:56.737 [2024-07-15 11:41:25.331353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.331390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 00:29:56.737 [2024-07-15 11:41:25.331814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.331825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 00:29:56.737 [2024-07-15 11:41:25.332198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.332208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 
00:29:56.737 [2024-07-15 11:41:25.332568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.332579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 00:29:56.737 [2024-07-15 11:41:25.332969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.332978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 00:29:56.737 [2024-07-15 11:41:25.333345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.333356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 00:29:56.737 [2024-07-15 11:41:25.333764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.333774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 00:29:56.737 [2024-07-15 11:41:25.334236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.334245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 00:29:56.737 [2024-07-15 11:41:25.334629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.334638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 00:29:56.737 [2024-07-15 11:41:25.334951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.334964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 00:29:56.737 [2024-07-15 11:41:25.335261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.335271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 00:29:56.737 [2024-07-15 11:41:25.335662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.335672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 00:29:56.737 [2024-07-15 11:41:25.336081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.336090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 
00:29:56.737 [2024-07-15 11:41:25.336365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.336376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 00:29:56.737 [2024-07-15 11:41:25.336760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.336770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 00:29:56.737 [2024-07-15 11:41:25.337139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.337149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 00:29:56.737 [2024-07-15 11:41:25.337540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.337549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 00:29:56.737 [2024-07-15 11:41:25.337936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.337945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 00:29:56.737 [2024-07-15 11:41:25.338452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.338490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 00:29:56.737 [2024-07-15 11:41:25.338882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.338894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 00:29:56.737 [2024-07-15 11:41:25.339405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.339442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 00:29:56.737 [2024-07-15 11:41:25.339863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.339875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 00:29:56.737 [2024-07-15 11:41:25.340304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.340342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 
00:29:56.737 [2024-07-15 11:41:25.340723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.340736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 00:29:56.737 [2024-07-15 11:41:25.341055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.737 [2024-07-15 11:41:25.341065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.737 qpair failed and we were unable to recover it. 00:29:56.737 [2024-07-15 11:41:25.341494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.341504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.341864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.341874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.342258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.342267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.342631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.342640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.343001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.343010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.343415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.343425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.343787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.343796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.344148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.344157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 
00:29:56.738 [2024-07-15 11:41:25.344561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.344571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.344849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.344858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.345241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.345251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.345615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.345627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.346011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.346021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.346433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.346443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.346842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.346851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.347214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.347223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.347602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.347612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.347995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.348004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 
00:29:56.738 [2024-07-15 11:41:25.348368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.348378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.348764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.348774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.349146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.349156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.349551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.349560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.349837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.349846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.350220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.350230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.350594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.350603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.350969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.350978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.351383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.351392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.351793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.351802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 
00:29:56.738 [2024-07-15 11:41:25.352192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.352201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.352611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.352620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.353003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.353012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.353420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.353431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.353813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.353822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.354113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.354127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.354512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.354521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.354927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.354936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.355397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.355433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.355776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.355787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 
00:29:56.738 [2024-07-15 11:41:25.356185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.356196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.356575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.356584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.357005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.357014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.738 qpair failed and we were unable to recover it. 00:29:56.738 [2024-07-15 11:41:25.357417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.738 [2024-07-15 11:41:25.357426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.739 qpair failed and we were unable to recover it. 00:29:56.739 [2024-07-15 11:41:25.357795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.739 [2024-07-15 11:41:25.357804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.739 qpair failed and we were unable to recover it. 00:29:56.739 [2024-07-15 11:41:25.358191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.739 [2024-07-15 11:41:25.358201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.739 qpair failed and we were unable to recover it. 00:29:56.739 [2024-07-15 11:41:25.358589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.739 [2024-07-15 11:41:25.358598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.739 qpair failed and we were unable to recover it. 00:29:56.739 [2024-07-15 11:41:25.358961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.739 [2024-07-15 11:41:25.358970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.739 qpair failed and we were unable to recover it. 00:29:56.739 [2024-07-15 11:41:25.359355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.739 [2024-07-15 11:41:25.359365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.739 qpair failed and we were unable to recover it. 00:29:56.739 [2024-07-15 11:41:25.359676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.739 [2024-07-15 11:41:25.359685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.739 qpair failed and we were unable to recover it. 
00:29:56.739 [2024-07-15 11:41:25.360081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.739 [2024-07-15 11:41:25.360090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.739 qpair failed and we were unable to recover it. 00:29:56.739 [2024-07-15 11:41:25.360463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.739 [2024-07-15 11:41:25.360472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.739 qpair failed and we were unable to recover it. 00:29:56.739 [2024-07-15 11:41:25.360841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.739 [2024-07-15 11:41:25.360850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.739 qpair failed and we were unable to recover it. 00:29:56.739 [2024-07-15 11:41:25.361297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.739 [2024-07-15 11:41:25.361307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.739 qpair failed and we were unable to recover it. 00:29:56.739 [2024-07-15 11:41:25.361590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.739 [2024-07-15 11:41:25.361604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.739 qpair failed and we were unable to recover it. 00:29:56.739 [2024-07-15 11:41:25.361889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.739 [2024-07-15 11:41:25.361898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.739 qpair failed and we were unable to recover it. 00:29:56.739 [2024-07-15 11:41:25.362303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.739 [2024-07-15 11:41:25.362313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.739 qpair failed and we were unable to recover it. 00:29:56.739 [2024-07-15 11:41:25.362726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.739 [2024-07-15 11:41:25.362735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.739 qpair failed and we were unable to recover it. 00:29:56.739 [2024-07-15 11:41:25.363104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.739 [2024-07-15 11:41:25.363113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.739 qpair failed and we were unable to recover it. 00:29:56.739 [2024-07-15 11:41:25.363486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.739 [2024-07-15 11:41:25.363495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:56.739 qpair failed and we were unable to recover it. 
00:29:56.739 [2024-07-15 11:41:25.363805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.739 [2024-07-15 11:41:25.363815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:56.739 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats continuously with new timestamps from 2024-07-15 11:41:25.363 through 11:41:25.445: posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. ...]
00:29:57.021 [2024-07-15 11:41:25.445068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.021 [2024-07-15 11:41:25.445078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:57.021 qpair failed and we were unable to recover it.
00:29:57.021 [2024-07-15 11:41:25.445446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.021 [2024-07-15 11:41:25.445456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.021 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.445660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.445671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.446063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.446072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.446448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.446457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.446840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.446849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.447246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.447259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.447640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.447649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.448031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.448041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.448414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.448424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.448763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.448773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 
00:29:57.022 [2024-07-15 11:41:25.449089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.449098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.449492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.449502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.449854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.449863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.450233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.450242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.450660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.450669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.451067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.451076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.451464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.451474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.451861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.451870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.452155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.452165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.452550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.452559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 
00:29:57.022 [2024-07-15 11:41:25.452833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.452843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.453277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.453286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.453653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.453662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.454025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.454034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.454236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.454246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.454612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.454621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.454985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.454993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.455445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.455454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.455814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.455823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.456208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.456221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 
00:29:57.022 [2024-07-15 11:41:25.456645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.456654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.457026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.457035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.457357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.457367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.457705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.457714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.458119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.458131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.458563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.458572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.458956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.458965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.459324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.459334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.459804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.459813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.460281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.460318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 
00:29:57.022 [2024-07-15 11:41:25.460622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.460640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.461030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.022 [2024-07-15 11:41:25.461039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.022 qpair failed and we were unable to recover it. 00:29:57.022 [2024-07-15 11:41:25.461447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.461457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.461826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.461835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.462197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.462207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.462511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.462520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.462902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.462911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.463277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.463287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.463684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.463693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.464117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.464132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 
00:29:57.023 [2024-07-15 11:41:25.464404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.464414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.464793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.464802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.465176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.465186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.465467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.465476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.465863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.465872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.466234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.466244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.466637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.466648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.467032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.467042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.467435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.467444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.467814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.467823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 
00:29:57.023 [2024-07-15 11:41:25.468132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.468142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.468507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.468516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.468875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.468884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.469248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.469257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.469584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.469593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.469983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.469992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.470356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.470366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.470745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.470755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.471140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.471149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.471542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.471551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 
00:29:57.023 [2024-07-15 11:41:25.471913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.471923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.472251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.472260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.472651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.472660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.473021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.473030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.473416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.473425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.473827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.473836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.474108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.474117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.474470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.474480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.474876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.474885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.475287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.475297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 
00:29:57.023 [2024-07-15 11:41:25.475700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.475709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.476096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.476105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.476457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.476466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.023 [2024-07-15 11:41:25.476876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.023 [2024-07-15 11:41:25.476885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.023 qpair failed and we were unable to recover it. 00:29:57.024 [2024-07-15 11:41:25.477365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.477402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 00:29:57.024 [2024-07-15 11:41:25.477810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.477822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 00:29:57.024 [2024-07-15 11:41:25.478203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.478213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 00:29:57.024 [2024-07-15 11:41:25.478479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.478489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 00:29:57.024 [2024-07-15 11:41:25.478891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.478900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 00:29:57.024 [2024-07-15 11:41:25.479267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.479279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 
00:29:57.024 [2024-07-15 11:41:25.479696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.479706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 00:29:57.024 [2024-07-15 11:41:25.480125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.480135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 00:29:57.024 [2024-07-15 11:41:25.480544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.480553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 00:29:57.024 [2024-07-15 11:41:25.480965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.480974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 00:29:57.024 [2024-07-15 11:41:25.481456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.481493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 00:29:57.024 [2024-07-15 11:41:25.481903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.481916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 00:29:57.024 [2024-07-15 11:41:25.482464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.482502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 00:29:57.024 [2024-07-15 11:41:25.482889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.482905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 00:29:57.024 [2024-07-15 11:41:25.483378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.483415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 00:29:57.024 [2024-07-15 11:41:25.483825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.483837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 
00:29:57.024 [2024-07-15 11:41:25.484332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.484369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 00:29:57.024 [2024-07-15 11:41:25.484779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.484791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 00:29:57.024 [2024-07-15 11:41:25.485193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.485203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 00:29:57.024 [2024-07-15 11:41:25.485602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.485612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 00:29:57.024 [2024-07-15 11:41:25.485965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.485974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 00:29:57.024 [2024-07-15 11:41:25.486359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.486369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 00:29:57.024 [2024-07-15 11:41:25.486759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.486768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 00:29:57.024 [2024-07-15 11:41:25.487140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.487150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 00:29:57.024 [2024-07-15 11:41:25.487542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.487552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 00:29:57.024 [2024-07-15 11:41:25.487992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.488001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 
00:29:57.024 [2024-07-15 11:41:25.488275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.488286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 00:29:57.024 [2024-07-15 11:41:25.488670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.488680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 00:29:57.024 [2024-07-15 11:41:25.489039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.489048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 00:29:57.024 [2024-07-15 11:41:25.489430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.489440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 00:29:57.024 [2024-07-15 11:41:25.489838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.489847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 00:29:57.024 [2024-07-15 11:41:25.490253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.024 [2024-07-15 11:41:25.490263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.024 qpair failed and we were unable to recover it. 00:29:57.025 [2024-07-15 11:41:25.490646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.025 [2024-07-15 11:41:25.490656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.025 qpair failed and we were unable to recover it. 00:29:57.025 [2024-07-15 11:41:25.491111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.025 [2024-07-15 11:41:25.491120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.025 qpair failed and we were unable to recover it. 00:29:57.025 [2024-07-15 11:41:25.491468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.025 [2024-07-15 11:41:25.491478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.025 qpair failed and we were unable to recover it. 00:29:57.025 [2024-07-15 11:41:25.491890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.025 [2024-07-15 11:41:25.491899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.025 qpair failed and we were unable to recover it. 
00:29:57.025 [2024-07-15 11:41:25.492186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.025 [2024-07-15 11:41:25.492196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.025 qpair failed and we were unable to recover it. 00:29:57.025 [2024-07-15 11:41:25.492576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.025 [2024-07-15 11:41:25.492585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.025 qpair failed and we were unable to recover it. 00:29:57.025 [2024-07-15 11:41:25.492945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.025 [2024-07-15 11:41:25.492954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.025 qpair failed and we were unable to recover it. 00:29:57.025 [2024-07-15 11:41:25.493367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.025 [2024-07-15 11:41:25.493376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.025 qpair failed and we were unable to recover it. 00:29:57.025 [2024-07-15 11:41:25.493853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.025 [2024-07-15 11:41:25.493865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.025 qpair failed and we were unable to recover it. 00:29:57.025 [2024-07-15 11:41:25.494342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.025 [2024-07-15 11:41:25.494379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.025 qpair failed and we were unable to recover it. 00:29:57.025 [2024-07-15 11:41:25.494803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.025 [2024-07-15 11:41:25.494815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.025 qpair failed and we were unable to recover it. 00:29:57.025 [2024-07-15 11:41:25.495180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.025 [2024-07-15 11:41:25.495190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.025 qpair failed and we were unable to recover it. 00:29:57.025 [2024-07-15 11:41:25.495613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.025 [2024-07-15 11:41:25.495623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.025 qpair failed and we were unable to recover it. 00:29:57.025 [2024-07-15 11:41:25.496014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.025 [2024-07-15 11:41:25.496023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.025 qpair failed and we were unable to recover it. 
00:29:57.025 [2024-07-15 11:41:25.496413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.025 [2024-07-15 11:41:25.496423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:57.025 qpair failed and we were unable to recover it.
00:29:57.030 [... the same sequence of "connect() failed, errno = 111", "sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it." repeats for every reconnect attempt from 11:41:25.496 through 11:41:25.580; only the timestamps differ ...]
00:29:57.030 [2024-07-15 11:41:25.580841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.030 [2024-07-15 11:41:25.580851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.030 qpair failed and we were unable to recover it. 00:29:57.030 [2024-07-15 11:41:25.581255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.030 [2024-07-15 11:41:25.581264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.030 qpair failed and we were unable to recover it. 00:29:57.030 [2024-07-15 11:41:25.581659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.030 [2024-07-15 11:41:25.581670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.030 qpair failed and we were unable to recover it. 00:29:57.030 [2024-07-15 11:41:25.582041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.030 [2024-07-15 11:41:25.582050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.030 qpair failed and we were unable to recover it. 00:29:57.030 [2024-07-15 11:41:25.582455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.030 [2024-07-15 11:41:25.582465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.030 qpair failed and we were unable to recover it. 00:29:57.030 [2024-07-15 11:41:25.582853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.030 [2024-07-15 11:41:25.582862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.030 qpair failed and we were unable to recover it. 00:29:57.030 [2024-07-15 11:41:25.583256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.030 [2024-07-15 11:41:25.583266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.030 qpair failed and we were unable to recover it. 00:29:57.030 [2024-07-15 11:41:25.583721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.030 [2024-07-15 11:41:25.583730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.030 qpair failed and we were unable to recover it. 00:29:57.030 [2024-07-15 11:41:25.584092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.030 [2024-07-15 11:41:25.584101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.030 qpair failed and we were unable to recover it. 00:29:57.030 [2024-07-15 11:41:25.584479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.030 [2024-07-15 11:41:25.584488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.030 qpair failed and we were unable to recover it. 
00:29:57.030 [2024-07-15 11:41:25.584940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.030 [2024-07-15 11:41:25.584949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.030 qpair failed and we were unable to recover it. 00:29:57.030 [2024-07-15 11:41:25.585456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.030 [2024-07-15 11:41:25.585494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.030 qpair failed and we were unable to recover it. 00:29:57.030 [2024-07-15 11:41:25.585869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.030 [2024-07-15 11:41:25.585881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.030 qpair failed and we were unable to recover it. 00:29:57.030 [2024-07-15 11:41:25.586362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.030 [2024-07-15 11:41:25.586398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.030 qpair failed and we were unable to recover it. 00:29:57.030 [2024-07-15 11:41:25.586761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.030 [2024-07-15 11:41:25.586773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.030 qpair failed and we were unable to recover it. 00:29:57.030 [2024-07-15 11:41:25.587071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.030 [2024-07-15 11:41:25.587081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.030 qpair failed and we were unable to recover it. 00:29:57.030 [2024-07-15 11:41:25.587467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.030 [2024-07-15 11:41:25.587477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.030 qpair failed and we were unable to recover it. 00:29:57.030 [2024-07-15 11:41:25.587926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.030 [2024-07-15 11:41:25.587935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.030 qpair failed and we were unable to recover it. 00:29:57.030 [2024-07-15 11:41:25.588419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.588456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.588830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.588844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 
00:29:57.031 [2024-07-15 11:41:25.589230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.589240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.589638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.589648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.590038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.590049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.590448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.590458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.590740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.590750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.591142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.591151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.591525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.591534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.591909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.591918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.592282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.592291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.592720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.592733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 
00:29:57.031 [2024-07-15 11:41:25.593097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.593107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.593495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.593505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.593898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.593908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.594350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.594359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.594728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.594737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.595134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.595144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.595594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.595603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.595977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.595987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.596301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.596311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.596727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.596736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 
00:29:57.031 [2024-07-15 11:41:25.597113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.597124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.597501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.597510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.597872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.597881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.598408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.598445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.598660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.598673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.599103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.599113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.599568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.599579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.599859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.599870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.600251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.600261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.600582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.600592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 
00:29:57.031 [2024-07-15 11:41:25.601022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.601031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.601419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.601429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.601814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.601823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.602192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.602202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.602603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.602613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.603021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.603030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.603441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.603451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.603855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.603865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.604277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.604286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.031 qpair failed and we were unable to recover it. 00:29:57.031 [2024-07-15 11:41:25.604662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.031 [2024-07-15 11:41:25.604672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 
00:29:57.032 [2024-07-15 11:41:25.605057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.605067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.605480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.605490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.605852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.605862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.606229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.606239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.606626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.606636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.607022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.607031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.607338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.607348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.607746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.607756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.608037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.608047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.608372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.608382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 
00:29:57.032 [2024-07-15 11:41:25.608839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.608851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.609262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.609272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.609673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.609682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.610044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.610053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.610440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.610449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.610940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.610950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.611339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.611349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.611740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.611749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.612018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.612028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.612404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.612413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 
00:29:57.032 [2024-07-15 11:41:25.612804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.612813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.613218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.613228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.613586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.613597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.614006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.614017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.614315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.614326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.614636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.614645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.615015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.615025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.615406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.615424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.615812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.615821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.616243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.616253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 
00:29:57.032 [2024-07-15 11:41:25.616632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.616642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.617049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.617058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.617363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.617372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.617647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.032 [2024-07-15 11:41:25.617656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.032 qpair failed and we were unable to recover it. 00:29:57.032 [2024-07-15 11:41:25.618071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.618081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.618483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.618494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.618881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.618891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.619263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.619274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.619555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.619564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.619872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.619881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 
00:29:57.033 [2024-07-15 11:41:25.620173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.620182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.620567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.620576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.620984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.620993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.621415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.621425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.621885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.621894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.622290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.622299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.622664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.622674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.623071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.623080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.623475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.623485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.623950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.623959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 
00:29:57.033 [2024-07-15 11:41:25.624363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.624372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.624755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.624764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.625125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.625135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.625523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.625533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.625919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.625928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.626320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.626329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.626720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.626729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.627113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.627125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.627431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.627440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.627896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.627906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 
00:29:57.033 [2024-07-15 11:41:25.628380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.628417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.628837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.628848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.629142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.629153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.629594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.629603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.630009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.630018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.630395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.630405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.630889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.630899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.631391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.631400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.631829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.631839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.632179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.632189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 
00:29:57.033 [2024-07-15 11:41:25.632577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.632585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.632926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.632936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.633443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.633453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.633827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.633835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.634228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.634237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.033 qpair failed and we were unable to recover it. 00:29:57.033 [2024-07-15 11:41:25.634612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.033 [2024-07-15 11:41:25.634621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.635016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.635025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.635456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.635465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.635850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.635862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.636239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.636249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 
00:29:57.034 [2024-07-15 11:41:25.636637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.636646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.637039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.637048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.637340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.637350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.637746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.637755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.638142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.638152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.638548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.638558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.638953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.638961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.639365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.639374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.639745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.639755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.640126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.640135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 
00:29:57.034 [2024-07-15 11:41:25.640503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.640512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.640888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.640898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.641361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.641398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.641791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.641803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.642190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.642201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.642647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.642657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.642950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.642960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.643340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.643350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.643638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.643647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.644033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.644042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 
00:29:57.034 [2024-07-15 11:41:25.644432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.644442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.644830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.644840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.645139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.645150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.645556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.645567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.645956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.645965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.646350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.646361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.646750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.646760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.647040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.647050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.647349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.647359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.647657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.647667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 
00:29:57.034 [2024-07-15 11:41:25.648055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.648066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.648456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.648466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.648851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.648862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.649139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.649149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.649520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.649531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.649940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.649950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.650329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.034 [2024-07-15 11:41:25.650339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.034 qpair failed and we were unable to recover it. 00:29:57.034 [2024-07-15 11:41:25.650716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.650726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.650974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.650986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.651384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.651395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 
00:29:57.035 [2024-07-15 11:41:25.651779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.651789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.652172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.652182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.652523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.652533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.652941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.652952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.653337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.653348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.653734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.653743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.654130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.654140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.654528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.654539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.654821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.654831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.655126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.655136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 
00:29:57.035 [2024-07-15 11:41:25.655516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.655526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.655929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.655939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.656410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.656446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.656848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.656860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.657386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.657422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.657831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.657842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.658235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.658245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.658640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.658648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.659074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.659083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.659392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.659401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 
00:29:57.035 [2024-07-15 11:41:25.659821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.659830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.660214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.660224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.660627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.660637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.661041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.661050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.661506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.661517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.661897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.661907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.662131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.662148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.662508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.662519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.662900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.662910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.663389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.663426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 
00:29:57.035 [2024-07-15 11:41:25.663826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.663838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.664366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.664405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.664843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.664855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.665216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.665226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.665594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.665603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.665984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.665993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.666289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.666299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.666708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.666717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.035 qpair failed and we were unable to recover it. 00:29:57.035 [2024-07-15 11:41:25.667102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.035 [2024-07-15 11:41:25.667112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.667310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.667324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 
00:29:57.036 [2024-07-15 11:41:25.667612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.667622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.667912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.667922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.668317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.668327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.668683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.668693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.669080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.669090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.669427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.669436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.669810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.669819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.670266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.670275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.670747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.670756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.671205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.671214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 
00:29:57.036 [2024-07-15 11:41:25.671493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.671502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.671879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.671888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.672295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.672305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.672697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.672706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.673110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.673119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.673526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.673536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.673928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.673937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.674440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.674476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.674886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.674898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.675264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.675275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 
00:29:57.036 [2024-07-15 11:41:25.675620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.675629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.676033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.676042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.676330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.676340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.676739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.676748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.677039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.677048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.677413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.677422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.677790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.677800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.678213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.678222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.678622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.678632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.679024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.679033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 
00:29:57.036 [2024-07-15 11:41:25.679229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.679240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.679553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.679562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.679958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.679967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.680287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.680297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.680708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.680717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.681064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.681073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.681457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.681466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.681729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.681739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.682058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.682067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.036 qpair failed and we were unable to recover it. 00:29:57.036 [2024-07-15 11:41:25.682477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.036 [2024-07-15 11:41:25.682486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 
00:29:57.037 [2024-07-15 11:41:25.682853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.682863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.683272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.683281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.683660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.683669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.684078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.684088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.684460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.684470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.684875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.684885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.685232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.685242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.685660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.685669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.686046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.686055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.686443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.686454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 
00:29:57.037 [2024-07-15 11:41:25.686841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.686850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.687227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.687239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.687630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.687640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.688025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.688034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.688308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.688320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.688689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.688698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.689101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.689109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.689490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.689500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.689802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.689812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.690159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.690169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 
00:29:57.037 [2024-07-15 11:41:25.690557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.690567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.690927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.690936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.691323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.691333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.691742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.691751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.692150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.692160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.692543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.692561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.692963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.692972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.693212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.693222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.693609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.693618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.693978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.693987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 
00:29:57.037 [2024-07-15 11:41:25.694359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.694369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.694784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.694794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.695177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.695187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.695574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.695583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.037 [2024-07-15 11:41:25.695970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.037 [2024-07-15 11:41:25.695980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.037 qpair failed and we were unable to recover it. 00:29:57.038 [2024-07-15 11:41:25.696370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.038 [2024-07-15 11:41:25.696380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.038 qpair failed and we were unable to recover it. 00:29:57.038 [2024-07-15 11:41:25.696763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.038 [2024-07-15 11:41:25.696774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.038 qpair failed and we were unable to recover it. 00:29:57.038 [2024-07-15 11:41:25.697163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.038 [2024-07-15 11:41:25.697172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.038 qpair failed and we were unable to recover it. 00:29:57.038 [2024-07-15 11:41:25.697556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.038 [2024-07-15 11:41:25.697566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.038 qpair failed and we were unable to recover it. 00:29:57.038 [2024-07-15 11:41:25.697966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.038 [2024-07-15 11:41:25.697976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.038 qpair failed and we were unable to recover it. 
00:29:57.038 [2024-07-15 11:41:25.698345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.038 [2024-07-15 11:41:25.698355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.038 qpair failed and we were unable to recover it. 00:29:57.038 [2024-07-15 11:41:25.698726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.038 [2024-07-15 11:41:25.698734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.038 qpair failed and we were unable to recover it. 00:29:57.038 [2024-07-15 11:41:25.699120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.038 [2024-07-15 11:41:25.699134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.038 qpair failed and we were unable to recover it. 00:29:57.038 [2024-07-15 11:41:25.699447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.038 [2024-07-15 11:41:25.699456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.038 qpair failed and we were unable to recover it. 00:29:57.038 [2024-07-15 11:41:25.699830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.038 [2024-07-15 11:41:25.699839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.038 qpair failed and we were unable to recover it. 00:29:57.038 [2024-07-15 11:41:25.700248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.038 [2024-07-15 11:41:25.700257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.038 qpair failed and we were unable to recover it. 00:29:57.038 [2024-07-15 11:41:25.700637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.038 [2024-07-15 11:41:25.700646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.038 qpair failed and we were unable to recover it. 00:29:57.038 [2024-07-15 11:41:25.701004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.038 [2024-07-15 11:41:25.701013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.038 qpair failed and we were unable to recover it. 00:29:57.038 [2024-07-15 11:41:25.701509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.038 [2024-07-15 11:41:25.701518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.038 qpair failed and we were unable to recover it. 00:29:57.038 [2024-07-15 11:41:25.701822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.038 [2024-07-15 11:41:25.701831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.038 qpair failed and we were unable to recover it. 
00:29:57.038 [2024-07-15 11:41:25.702252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.038 [2024-07-15 11:41:25.702261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.038 qpair failed and we were unable to recover it. 00:29:57.038 [2024-07-15 11:41:25.702633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.038 [2024-07-15 11:41:25.702642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.038 qpair failed and we were unable to recover it. 00:29:57.038 [2024-07-15 11:41:25.702926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.038 [2024-07-15 11:41:25.702935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.038 qpair failed and we were unable to recover it. 00:29:57.038 [2024-07-15 11:41:25.703324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.038 [2024-07-15 11:41:25.703334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.038 qpair failed and we were unable to recover it. 00:29:57.038 [2024-07-15 11:41:25.703693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.038 [2024-07-15 11:41:25.703703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.038 qpair failed and we were unable to recover it. 00:29:57.038 [2024-07-15 11:41:25.704015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.038 [2024-07-15 11:41:25.704029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.038 qpair failed and we were unable to recover it. 00:29:57.038 [2024-07-15 11:41:25.704420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.038 [2024-07-15 11:41:25.704430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.038 qpair failed and we were unable to recover it. 00:29:57.038 [2024-07-15 11:41:25.704818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.038 [2024-07-15 11:41:25.704827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.038 qpair failed and we were unable to recover it. 00:29:57.310 [2024-07-15 11:41:25.705275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.310 [2024-07-15 11:41:25.705286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.310 qpair failed and we were unable to recover it. 00:29:57.310 [2024-07-15 11:41:25.705693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.310 [2024-07-15 11:41:25.705702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.310 qpair failed and we were unable to recover it. 
00:29:57.310 [2024-07-15 11:41:25.706087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.310 [2024-07-15 11:41:25.706095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.310 qpair failed and we were unable to recover it. 00:29:57.310 [2024-07-15 11:41:25.706473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.310 [2024-07-15 11:41:25.706483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.310 qpair failed and we were unable to recover it. 00:29:57.310 [2024-07-15 11:41:25.706864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.310 [2024-07-15 11:41:25.706874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.310 qpair failed and we were unable to recover it. 00:29:57.310 [2024-07-15 11:41:25.707343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.310 [2024-07-15 11:41:25.707353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.310 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.707718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.707728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.708004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.708013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.708369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.708379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.708759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.708769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.709146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.709156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.709527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.709536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 
00:29:57.311 [2024-07-15 11:41:25.709932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.709941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.710314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.710324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.710712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.710722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.711164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.711174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.711526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.711536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.711901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.711910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.712318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.712327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.712697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.712706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.712920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.712933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.713349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.713359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 
00:29:57.311 [2024-07-15 11:41:25.713721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.713730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.714114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.714127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.714519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.714532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.714908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.714917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.715313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.715322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.715623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.715634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.716083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.716092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.716466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.716475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.716859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.716868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.717291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.717300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 
00:29:57.311 [2024-07-15 11:41:25.717688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.717698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.718104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.718113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.718492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.718501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.718879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.718888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.719366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.719403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.719638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.719652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.719935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.719945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.720320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.720331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.311 [2024-07-15 11:41:25.720722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.311 [2024-07-15 11:41:25.720732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.311 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.721116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.721131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 
00:29:57.312 [2024-07-15 11:41:25.721505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.721515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.721882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.721892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.722366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.722404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.722815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.722827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.723232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.723243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.723621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.723631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.723920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.723936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.724324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.724334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.724697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.724706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.725091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.725100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 
00:29:57.312 [2024-07-15 11:41:25.725486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.725496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.725877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.725887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.726267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.726276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.726668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.726678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.726964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.726974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.727265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.727275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.727653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.727663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.728090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.728101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.728581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.728591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.728971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.728982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 
00:29:57.312 [2024-07-15 11:41:25.729436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.729473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.729878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.729890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.730325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.730362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.730777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.730793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.731159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.731170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.731549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.731559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.731952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.731961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.732333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.732344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.732660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.732670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.733055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.733065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 
00:29:57.312 [2024-07-15 11:41:25.733456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.733466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.733830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.733839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.734244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.734253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.734650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.734660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.735045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.735056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.735445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.735455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.735886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.735896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.736279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.736289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.736675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.736684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 00:29:57.312 [2024-07-15 11:41:25.737100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.312 [2024-07-15 11:41:25.737109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.312 qpair failed and we were unable to recover it. 
00:29:57.313 [2024-07-15 11:41:25.737499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.737510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.737893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.737903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.738384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.738420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.738832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.738843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.739205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.739215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.739610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.739620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.740014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.740024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.740246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.740258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.740658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.740667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.741060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.741069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 
00:29:57.313 [2024-07-15 11:41:25.741472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.741487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.741764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.741775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.742213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.742223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.742592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.742601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.742904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.742913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.743321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.743331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.743732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.743743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.744132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.744143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.744616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.744626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.745038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.745047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 
00:29:57.313 [2024-07-15 11:41:25.745420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.745429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.745839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.745848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.746055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.746066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.746449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.746459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.746846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.746856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.747221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.747231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.747609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.747618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.748026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.748036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.748436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.748447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.748828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.748838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 
00:29:57.313 [2024-07-15 11:41:25.749152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.749162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.749570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.749579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.749981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.749990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.750353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.750363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.750755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.750764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.751139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.751149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.751520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.751529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.751889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.751899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.752288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.752298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.752554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.752565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 
00:29:57.313 [2024-07-15 11:41:25.752951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.313 [2024-07-15 11:41:25.752960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.313 qpair failed and we were unable to recover it. 00:29:57.313 [2024-07-15 11:41:25.753338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.753348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.753730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.753740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.754152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.754161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.754536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.754546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.754833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.754842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.755242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.755252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.755657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.755666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.755958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.755967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.756353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.756362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 
00:29:57.314 [2024-07-15 11:41:25.756729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.756739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.757226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.757239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.757607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.757616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.757976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.757985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.758370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.758380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.758773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.758783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.759185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.759195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.759434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.759444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.759743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.759752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.760156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.760165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 
00:29:57.314 [2024-07-15 11:41:25.760541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.760550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.760921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.760930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.761396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.761406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.761768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.761777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.762145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.762154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.762544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.762554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.762954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.762963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.763335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.763345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.763730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.763739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.764137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.764148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 
00:29:57.314 [2024-07-15 11:41:25.764546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.764555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.764962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.764971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.765429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.765466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.765890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.765902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.766399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.766435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.766850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.766861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.767342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.767379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.767802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.314 [2024-07-15 11:41:25.767813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.314 qpair failed and we were unable to recover it. 00:29:57.314 [2024-07-15 11:41:25.768024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.768040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 00:29:57.315 [2024-07-15 11:41:25.768444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.768454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 
00:29:57.315 [2024-07-15 11:41:25.768820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.768829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 00:29:57.315 [2024-07-15 11:41:25.769214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.769225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 00:29:57.315 [2024-07-15 11:41:25.769621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.769630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 00:29:57.315 [2024-07-15 11:41:25.769999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.770009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 00:29:57.315 [2024-07-15 11:41:25.770398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.770408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 00:29:57.315 [2024-07-15 11:41:25.770768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.770776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 00:29:57.315 [2024-07-15 11:41:25.771180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.771190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 00:29:57.315 [2024-07-15 11:41:25.771561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.771570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 00:29:57.315 [2024-07-15 11:41:25.771937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.771947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 00:29:57.315 [2024-07-15 11:41:25.772306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.772316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 
00:29:57.315 [2024-07-15 11:41:25.772725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.772734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 00:29:57.315 [2024-07-15 11:41:25.773101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.773110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 00:29:57.315 [2024-07-15 11:41:25.773428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.773438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 00:29:57.315 [2024-07-15 11:41:25.773824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.773833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 00:29:57.315 [2024-07-15 11:41:25.774193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.774203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 00:29:57.315 [2024-07-15 11:41:25.774587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.774597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 00:29:57.315 [2024-07-15 11:41:25.774979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.774988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 00:29:57.315 [2024-07-15 11:41:25.775396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.775405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 00:29:57.315 [2024-07-15 11:41:25.775803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.775813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 00:29:57.315 [2024-07-15 11:41:25.776182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.776192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 
00:29:57.315 [2024-07-15 11:41:25.776561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.776570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 00:29:57.315 [2024-07-15 11:41:25.776939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.776948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 00:29:57.315 [2024-07-15 11:41:25.777332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.777342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 00:29:57.315 [2024-07-15 11:41:25.777746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.777756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 00:29:57.315 [2024-07-15 11:41:25.778131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.778142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 00:29:57.315 [2024-07-15 11:41:25.778527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.778536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 00:29:57.315 [2024-07-15 11:41:25.778938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.778948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 00:29:57.315 [2024-07-15 11:41:25.779440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.779477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 00:29:57.315 [2024-07-15 11:41:25.779787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.779799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 00:29:57.315 [2024-07-15 11:41:25.780091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.780100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 
00:29:57.315 [2024-07-15 11:41:25.780558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.315 [2024-07-15 11:41:25.780569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.315 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.780975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.780985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.781461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.781499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.781806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.781818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.782344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.782381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.782880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.782891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.783364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.783401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.783824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.783836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.784196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.784206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.784587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.784602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 
00:29:57.316 [2024-07-15 11:41:25.784987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.784997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.785357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.785366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.785748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.785758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.786052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.786061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.786421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.786431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.786844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.786853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.787214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.787224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.787613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.787622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.788006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.788015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.788411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.788421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 
00:29:57.316 [2024-07-15 11:41:25.788785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.788794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.789193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.789204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.790207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.790229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.790627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.790637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.790936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.790945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.791333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.791343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.791701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.791710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.792131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.792141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.792501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.792510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.792893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.792903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 
00:29:57.316 [2024-07-15 11:41:25.793288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.793298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.793604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.793614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.794010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.794019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.794300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.794311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.794710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.794720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.795120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.795136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.795508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.795517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.795920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.795929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.316 [2024-07-15 11:41:25.796301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.316 [2024-07-15 11:41:25.796310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.316 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.796721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.796731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 
00:29:57.317 [2024-07-15 11:41:25.797115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.797130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.797528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.797537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.797936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.797946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.798329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.798365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.798781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.798793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.799303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.799340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.799732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.799744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.800119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.800138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.800509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.800518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.800923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.800933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 
00:29:57.317 [2024-07-15 11:41:25.801421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.801458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.801882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.801894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.802386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.802423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.802831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.802843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.803341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.803378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.803662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.803674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.803958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.803967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.804352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.804362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.804787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.804796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.805200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.805211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 
00:29:57.317 [2024-07-15 11:41:25.805605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.805615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.806017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.806028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.806429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.806438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.806844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.806853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.807167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.807177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.807584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.807594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.807880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.807891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.808142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.808154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.808558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.808567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.808972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.808981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 
00:29:57.317 [2024-07-15 11:41:25.809352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.809361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.809700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.809710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.810098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.810107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.810473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.810483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.810886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.810895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.811295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.811305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.811678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.811689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.812100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.812112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.812561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.812571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 00:29:57.317 [2024-07-15 11:41:25.812950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.317 [2024-07-15 11:41:25.812959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.317 qpair failed and we were unable to recover it. 
00:29:57.317 [2024-07-15 11:41:25.813452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.813489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.813896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.813909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.814414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.814451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.814757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.814770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.815180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.815190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.815563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.815573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.815960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.815970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.816355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.816366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.816749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.816758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.817145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.817156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 
00:29:57.318 [2024-07-15 11:41:25.817543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.817552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.817953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.817963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.818403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.818413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.818783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.818792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.819068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.819077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.819457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.819467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.819866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.819876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.820283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.820294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.820685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.820694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.820939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.820949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 
00:29:57.318 [2024-07-15 11:41:25.821238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.821248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.821618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.821627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.821936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.821945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.822328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.822338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.822705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.822714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.823120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.823135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.823516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.823525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.823930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.823939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.824425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.824462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.824869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.824882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 
00:29:57.318 [2024-07-15 11:41:25.825402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.825440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.825820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.825832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.826222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.826232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.826594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.826604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.827017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.827027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.827420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.827429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.827794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.827803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.318 [2024-07-15 11:41:25.828213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.318 [2024-07-15 11:41:25.828223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.318 qpair failed and we were unable to recover it. 00:29:57.319 [2024-07-15 11:41:25.828611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.319 [2024-07-15 11:41:25.828626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.319 qpair failed and we were unable to recover it. 00:29:57.319 [2024-07-15 11:41:25.829017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.319 [2024-07-15 11:41:25.829026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.319 qpair failed and we were unable to recover it. 
00:29:57.319 [2024-07-15 11:41:25.829236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.319 [2024-07-15 11:41:25.829248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.319 qpair failed and we were unable to recover it. 00:29:57.319 [2024-07-15 11:41:25.829655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.319 [2024-07-15 11:41:25.829664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.319 qpair failed and we were unable to recover it. 00:29:57.319 [2024-07-15 11:41:25.830021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.319 [2024-07-15 11:41:25.830030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.319 qpair failed and we were unable to recover it. 00:29:57.319 [2024-07-15 11:41:25.830442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.319 [2024-07-15 11:41:25.830452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.319 qpair failed and we were unable to recover it. 00:29:57.319 [2024-07-15 11:41:25.830836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.319 [2024-07-15 11:41:25.830846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.319 qpair failed and we were unable to recover it. 00:29:57.319 [2024-07-15 11:41:25.831246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.319 [2024-07-15 11:41:25.831257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.319 qpair failed and we were unable to recover it. 00:29:57.319 [2024-07-15 11:41:25.831654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.319 [2024-07-15 11:41:25.831663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.319 qpair failed and we were unable to recover it. 00:29:57.319 [2024-07-15 11:41:25.832031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.319 [2024-07-15 11:41:25.832041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.319 qpair failed and we were unable to recover it. 00:29:57.319 [2024-07-15 11:41:25.832417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.319 [2024-07-15 11:41:25.832427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.319 qpair failed and we were unable to recover it. 00:29:57.319 [2024-07-15 11:41:25.832830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.319 [2024-07-15 11:41:25.832839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.319 qpair failed and we were unable to recover it. 
00:29:57.319 [2024-07-15 11:41:25.833229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.319 [2024-07-15 11:41:25.833239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:57.319 qpair failed and we were unable to recover it.
00:29:57.319 [2024-07-15 11:41:25.833611] through 00:29:57.324 [2024-07-15 11:41:25.918218]: the same three-line error sequence repeats continuously, once per connect attempt. Every attempt fails in posix_sock_create with connect() errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420, and each time the qpair fails and cannot be recovered.
00:29:57.324 [2024-07-15 11:41:25.918589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.324 [2024-07-15 11:41:25.918599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.324 qpair failed and we were unable to recover it. 00:29:57.324 [2024-07-15 11:41:25.918960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.324 [2024-07-15 11:41:25.918969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.324 qpair failed and we were unable to recover it. 00:29:57.324 [2024-07-15 11:41:25.919381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.324 [2024-07-15 11:41:25.919391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.324 qpair failed and we were unable to recover it. 00:29:57.324 [2024-07-15 11:41:25.919789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.324 [2024-07-15 11:41:25.919799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.324 qpair failed and we were unable to recover it. 00:29:57.324 [2024-07-15 11:41:25.920183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.324 [2024-07-15 11:41:25.920194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.324 qpair failed and we were unable to recover it. 00:29:57.324 [2024-07-15 11:41:25.920601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.324 [2024-07-15 11:41:25.920610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.324 qpair failed and we were unable to recover it. 00:29:57.324 [2024-07-15 11:41:25.921012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.324 [2024-07-15 11:41:25.921021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.324 qpair failed and we were unable to recover it. 00:29:57.324 [2024-07-15 11:41:25.921417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.324 [2024-07-15 11:41:25.921426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.324 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.921826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.921836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.922127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.922138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 
00:29:57.325 [2024-07-15 11:41:25.922531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.922540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.922976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.922985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.923469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.923507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.923923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.923936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.924331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.924368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.924778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.924789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.925327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.925364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.925781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.925793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.926168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.926178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.926633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.926643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 
00:29:57.325 [2024-07-15 11:41:25.927049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.927060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.927453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.927463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.927832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.927842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.928227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.928237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.928605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.928615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.928889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.928899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.929286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.929295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.929696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.929705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.930070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.930080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.930469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.930479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 
00:29:57.325 [2024-07-15 11:41:25.930883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.930892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.931293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.931303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.931693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.931702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.932104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.932113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.932483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.932494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.932900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.932910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.933468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.933506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.933905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.933917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.934435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.934472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.934890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.934902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 
00:29:57.325 [2024-07-15 11:41:25.935324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.935361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.935657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.935669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.936070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.936079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.936371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.936382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.936758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.936767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.937181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.937191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.937593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.937603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.937963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.937972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.938375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.325 [2024-07-15 11:41:25.938384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-15 11:41:25.938775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.938784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 
00:29:57.326 [2024-07-15 11:41:25.939184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.939198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.939607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.939617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.940009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.940019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.940427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.940437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.940839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.940850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.941236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.941245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.941639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.941648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.942044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.942053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.942424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.942434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.942819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.942828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 
00:29:57.326 [2024-07-15 11:41:25.943217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.943226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.943613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.943624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.944057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.944066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.944444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.944453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.944852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.944862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.945225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.945235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.945630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.945640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.946024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.946033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.946419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.946428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.946839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.946849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 
00:29:57.326 [2024-07-15 11:41:25.947210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.947220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.947608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.947619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.947896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.947906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.948160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.948172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.948472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.948481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.948870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.948879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.949265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.949275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.949664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.949674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.950075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.950085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.950453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.950463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 
00:29:57.326 [2024-07-15 11:41:25.950867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.950876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.951239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.951256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.951683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.951694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.951917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.951930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.952354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.952364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.952724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.952733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.953140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.953150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.953533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.953543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.953942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.953951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-15 11:41:25.954262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.954274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 
00:29:57.326 [2024-07-15 11:41:25.954865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.326 [2024-07-15 11:41:25.954881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.955271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.955285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.955670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.955679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.956078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.956087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.956537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.956547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.956832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.956850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.957236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.957246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.957631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.957640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.958061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.958070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.958454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.958464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 
00:29:57.327 [2024-07-15 11:41:25.958868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.958877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.959281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.959291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.959702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.959712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.960010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.960020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.960420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.960429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.960830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.960840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.961241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.961251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.961644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.961653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.962056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.962065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.962479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.962488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 
00:29:57.327 [2024-07-15 11:41:25.962848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.962858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.963242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.963251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.963546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.963555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.963965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.963974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.964357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.964367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.964745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.964754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.965141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.965151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.965565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.965574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.966020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.966032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.966409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.966418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 
00:29:57.327 [2024-07-15 11:41:25.966806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.966815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.967201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.967211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.967570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.967580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.967963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.967972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.968357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.968366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.968763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.968772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.969140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.969150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.969551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.969560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.969959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.969968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.970451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.970461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 
00:29:57.327 [2024-07-15 11:41:25.970732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.970741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.327 [2024-07-15 11:41:25.971132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.327 [2024-07-15 11:41:25.971142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.327 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.971560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.971569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.971955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.971964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.972440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.972477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.972787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.972798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.973188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.973198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.973584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.973593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.974008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.974017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.974247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.974260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 
00:29:57.328 [2024-07-15 11:41:25.974646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.974655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.975058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.975068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.975507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.975517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.975887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.975896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.976229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.976239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.976627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.976637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.977056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.977065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.977441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.977452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.977834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.977843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.978168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.978178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 
00:29:57.328 [2024-07-15 11:41:25.978583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.978592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.978960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.978969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.979261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.979271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.979624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.979634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.979998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.980007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.980482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.980491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.980845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.980854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.981243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.981254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.981636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.981646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.981855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.981868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 
00:29:57.328 [2024-07-15 11:41:25.982149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.982160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.982547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.982557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.982919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.982929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.983321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.983331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.983698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.983707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.984110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.984119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.328 [2024-07-15 11:41:25.984493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.328 [2024-07-15 11:41:25.984503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.328 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.984790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.984800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.985181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.985191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.985572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.985581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 
00:29:57.329 [2024-07-15 11:41:25.985981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.985991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.986383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.986392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.986792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.986801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.987129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.987139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.987504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.987519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.987907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.987917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.988357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.988394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.988717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.988730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.989142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.989154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.989558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.989569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 
00:29:57.329 [2024-07-15 11:41:25.989743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.989752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.990110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.990119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.990532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.990542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.990937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.990947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.991331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.991341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.991703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.991712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.992079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.992093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.992519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.992530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.992933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.992942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.993455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.993493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 
00:29:57.329 [2024-07-15 11:41:25.993977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.993989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.994448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.994485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.994916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.994929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.995419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.995458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.995862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.995875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.996344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.996381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.996806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.996818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.997182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.997192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.997580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.997590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.997978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.997987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 
00:29:57.329 [2024-07-15 11:41:25.998397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.998407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.998811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.998820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.999343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.999380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:25.999797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:25.999809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:26.000172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:26.000183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:26.000561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:26.000571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.329 [2024-07-15 11:41:26.000935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.329 [2024-07-15 11:41:26.000944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.329 qpair failed and we were unable to recover it. 00:29:57.604 [2024-07-15 11:41:26.001379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.604 [2024-07-15 11:41:26.001390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.604 qpair failed and we were unable to recover it. 00:29:57.604 [2024-07-15 11:41:26.001766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.604 [2024-07-15 11:41:26.001779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.604 qpair failed and we were unable to recover it. 00:29:57.604 [2024-07-15 11:41:26.002163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.604 [2024-07-15 11:41:26.002173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.604 qpair failed and we were unable to recover it. 
00:29:57.604 [2024-07-15 11:41:26.002569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.604 [2024-07-15 11:41:26.002580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.604 qpair failed and we were unable to recover it. 00:29:57.604 [2024-07-15 11:41:26.002971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.604 [2024-07-15 11:41:26.002980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.604 qpair failed and we were unable to recover it. 00:29:57.604 [2024-07-15 11:41:26.003375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.604 [2024-07-15 11:41:26.003385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.604 qpair failed and we were unable to recover it. 00:29:57.604 [2024-07-15 11:41:26.003770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.604 [2024-07-15 11:41:26.003780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.604 qpair failed and we were unable to recover it. 00:29:57.604 [2024-07-15 11:41:26.004152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.604 [2024-07-15 11:41:26.004162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.604 qpair failed and we were unable to recover it. 00:29:57.604 [2024-07-15 11:41:26.004548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.604 [2024-07-15 11:41:26.004557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.604 qpair failed and we were unable to recover it. 00:29:57.604 [2024-07-15 11:41:26.004924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.604 [2024-07-15 11:41:26.004934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.604 qpair failed and we were unable to recover it. 00:29:57.604 [2024-07-15 11:41:26.005337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.604 [2024-07-15 11:41:26.005347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.604 qpair failed and we were unable to recover it. 00:29:57.604 [2024-07-15 11:41:26.005715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.604 [2024-07-15 11:41:26.005724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.604 qpair failed and we were unable to recover it. 00:29:57.604 [2024-07-15 11:41:26.006092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.604 [2024-07-15 11:41:26.006101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.604 qpair failed and we were unable to recover it. 
00:29:57.604 [2024-07-15 11:41:26.006524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.604 [2024-07-15 11:41:26.006534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.604 qpair failed and we were unable to recover it. 00:29:57.604 [2024-07-15 11:41:26.006890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.604 [2024-07-15 11:41:26.006899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.604 qpair failed and we were unable to recover it. 00:29:57.604 [2024-07-15 11:41:26.007312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.604 [2024-07-15 11:41:26.007350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.604 qpair failed and we were unable to recover it. 00:29:57.604 [2024-07-15 11:41:26.007756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.604 [2024-07-15 11:41:26.007768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.604 qpair failed and we were unable to recover it. 00:29:57.604 [2024-07-15 11:41:26.008174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.604 [2024-07-15 11:41:26.008185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.604 qpair failed and we were unable to recover it. 00:29:57.604 [2024-07-15 11:41:26.008494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.604 [2024-07-15 11:41:26.008505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.604 qpair failed and we were unable to recover it. 00:29:57.604 [2024-07-15 11:41:26.008924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.604 [2024-07-15 11:41:26.008933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.604 qpair failed and we were unable to recover it. 00:29:57.604 [2024-07-15 11:41:26.009347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.604 [2024-07-15 11:41:26.009362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.604 qpair failed and we were unable to recover it. 00:29:57.604 [2024-07-15 11:41:26.009753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.604 [2024-07-15 11:41:26.009763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.604 qpair failed and we were unable to recover it. 00:29:57.604 [2024-07-15 11:41:26.010013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.604 [2024-07-15 11:41:26.010022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.604 qpair failed and we were unable to recover it. 
00:29:57.604 [2024-07-15 11:41:26.010384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.604 [2024-07-15 11:41:26.010395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.604 qpair failed and we were unable to recover it. 00:29:57.604 [2024-07-15 11:41:26.010787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.604 [2024-07-15 11:41:26.010796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.604 qpair failed and we were unable to recover it. 00:29:57.604 [2024-07-15 11:41:26.011194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.604 [2024-07-15 11:41:26.011203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.604 qpair failed and we were unable to recover it. 00:29:57.604 [2024-07-15 11:41:26.011587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.604 [2024-07-15 11:41:26.011596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.604 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.012000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.012009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.012288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.012297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.012693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.012703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.013014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.013025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.013400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.013410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.013865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.013874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 
00:29:57.605 [2024-07-15 11:41:26.014177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.014187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.014585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.014594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.014957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.014966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.015352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.015362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.015746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.015755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.016044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.016053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.016438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.016448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.016809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.016819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.017227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.017237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.017592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.017601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 
00:29:57.605 [2024-07-15 11:41:26.018059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.018068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.018498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.018508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.018907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.018917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.019277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.019288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.019660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.019671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.020050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.020059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.020484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.020493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.020896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.020905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.021317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.021327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.021749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.021759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 
00:29:57.605 [2024-07-15 11:41:26.022142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.022152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.022521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.022530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.022898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.022907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.023316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.023326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.023739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.023749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.024137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.024148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.024532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.024542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.024903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.024912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.025319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.025329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.025718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.025728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 
00:29:57.605 [2024-07-15 11:41:26.026140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.026150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.026534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.026544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.026978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.026987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.027350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.027360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.605 [2024-07-15 11:41:26.027737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.605 [2024-07-15 11:41:26.027747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.605 qpair failed and we were unable to recover it. 00:29:57.606 [2024-07-15 11:41:26.028134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.606 [2024-07-15 11:41:26.028144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.606 qpair failed and we were unable to recover it. 00:29:57.606 [2024-07-15 11:41:26.028495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.606 [2024-07-15 11:41:26.028505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.606 qpair failed and we were unable to recover it. 00:29:57.606 [2024-07-15 11:41:26.028889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.606 [2024-07-15 11:41:26.028898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.606 qpair failed and we were unable to recover it. 00:29:57.606 [2024-07-15 11:41:26.029166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.606 [2024-07-15 11:41:26.029175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.606 qpair failed and we were unable to recover it. 00:29:57.606 [2024-07-15 11:41:26.029483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.606 [2024-07-15 11:41:26.029492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.606 qpair failed and we were unable to recover it. 
00:29:57.606 [2024-07-15 11:41:26.029861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.606 [2024-07-15 11:41:26.029871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.606 qpair failed and we were unable to recover it. 00:29:57.606 [2024-07-15 11:41:26.030279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.606 [2024-07-15 11:41:26.030289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.606 qpair failed and we were unable to recover it. 00:29:57.606 [2024-07-15 11:41:26.030661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.606 [2024-07-15 11:41:26.030670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.606 qpair failed and we were unable to recover it. 00:29:57.606 [2024-07-15 11:41:26.030954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.606 [2024-07-15 11:41:26.030963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.606 qpair failed and we were unable to recover it. 00:29:57.606 [2024-07-15 11:41:26.031258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.606 [2024-07-15 11:41:26.031267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.606 qpair failed and we were unable to recover it. 00:29:57.606 [2024-07-15 11:41:26.031653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.606 [2024-07-15 11:41:26.031662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.606 qpair failed and we were unable to recover it. 00:29:57.606 [2024-07-15 11:41:26.031980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.606 [2024-07-15 11:41:26.031991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.606 qpair failed and we were unable to recover it. 00:29:57.606 [2024-07-15 11:41:26.032378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.606 [2024-07-15 11:41:26.032388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.606 qpair failed and we were unable to recover it. 00:29:57.606 [2024-07-15 11:41:26.032753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.606 [2024-07-15 11:41:26.032762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.606 qpair failed and we were unable to recover it. 00:29:57.606 [2024-07-15 11:41:26.033160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.606 [2024-07-15 11:41:26.033170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.606 qpair failed and we were unable to recover it. 
00:29:57.606 [2024-07-15 11:41:26.033551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.606 [2024-07-15 11:41:26.033560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.606 qpair failed and we were unable to recover it. 00:29:57.606 [2024-07-15 11:41:26.033929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.606 [2024-07-15 11:41:26.033939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.606 qpair failed and we were unable to recover it. 00:29:57.606 [2024-07-15 11:41:26.034963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.606 [2024-07-15 11:41:26.034985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.606 qpair failed and we were unable to recover it. 00:29:57.606 [2024-07-15 11:41:26.035348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.606 [2024-07-15 11:41:26.035359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.606 qpair failed and we were unable to recover it. 00:29:57.606 [2024-07-15 11:41:26.035774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.606 [2024-07-15 11:41:26.035783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.606 qpair failed and we were unable to recover it. 00:29:57.606 [2024-07-15 11:41:26.036193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.606 [2024-07-15 11:41:26.036206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.606 qpair failed and we were unable to recover it. 00:29:57.606 [2024-07-15 11:41:26.036590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.606 [2024-07-15 11:41:26.036599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.606 qpair failed and we were unable to recover it. 00:29:57.606 [2024-07-15 11:41:26.036870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.606 [2024-07-15 11:41:26.036879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.606 qpair failed and we were unable to recover it. 00:29:57.606 [2024-07-15 11:41:26.037252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.606 [2024-07-15 11:41:26.037261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.606 qpair failed and we were unable to recover it. 00:29:57.606 [2024-07-15 11:41:26.037661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.606 [2024-07-15 11:41:26.037670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.606 qpair failed and we were unable to recover it. 
00:29:57.606 [2024-07-15 11:41:26.038034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.606 [2024-07-15 11:41:26.038043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:57.606 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt between 11:41:26.038 and 11:41:26.120 ...]
00:29:57.611 [2024-07-15 11:41:26.120640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.611 [2024-07-15 11:41:26.120650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:57.611 qpair failed and we were unable to recover it.
00:29:57.611 [2024-07-15 11:41:26.121037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.611 [2024-07-15 11:41:26.121048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.611 qpair failed and we were unable to recover it. 00:29:57.611 [2024-07-15 11:41:26.121431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.611 [2024-07-15 11:41:26.121442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.611 qpair failed and we were unable to recover it. 00:29:57.611 [2024-07-15 11:41:26.121827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.611 [2024-07-15 11:41:26.121837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.122228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.122238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.122518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.122528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.122945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.122954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.123314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.123325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.123708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.123718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.124144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.124154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.124521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.124530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 
00:29:57.612 [2024-07-15 11:41:26.124909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.124918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.125248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.125259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.125642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.125652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.126013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.126023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.126323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.126334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.126715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.126724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.127130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.127139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.127530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.127540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.127923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.127933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.128329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.128339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 
00:29:57.612 [2024-07-15 11:41:26.128724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.128733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.129134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.129144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.129583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.129593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.129905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.129914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.130204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.130215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.130603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.130612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.130972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.130981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.131347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.131356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.131745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.131754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.132136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.132147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 
00:29:57.612 [2024-07-15 11:41:26.132536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.132548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.132842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.132852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.133253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.133262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.133627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.133636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.134031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.134040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.134407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.134417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.134801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.134810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.135213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.135222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.135620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.135629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 00:29:57.612 [2024-07-15 11:41:26.135995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.612 [2024-07-15 11:41:26.136004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.612 qpair failed and we were unable to recover it. 
00:29:57.613 [2024-07-15 11:41:26.136322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.136333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.136734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.136743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.137154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.137164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.137570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.137580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.137940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.137949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.138358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.138368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.138685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.138694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.139079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.139089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.139455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.139464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.139901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.139910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 
00:29:57.613 [2024-07-15 11:41:26.140276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.140286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.140668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.140678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.141060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.141069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.141440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.141449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.141741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.141750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.142132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.142142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.142502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.142511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.142898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.142910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.143391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.143429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.143848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.143861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 
00:29:57.613 [2024-07-15 11:41:26.144248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.144258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.144708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.144717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.144962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.144971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.145317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.145327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.145730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.145739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.146106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.146115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.146518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.146528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.146919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.146929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.147451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.147489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.147889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.147900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 
00:29:57.613 [2024-07-15 11:41:26.148387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.148423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.148669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.148683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.148945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.148955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.149367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.149378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.149780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.149789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.150192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.150202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.150577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.150586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.150953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.150962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.151343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.613 [2024-07-15 11:41:26.151354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.613 qpair failed and we were unable to recover it. 00:29:57.613 [2024-07-15 11:41:26.151738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.151747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 
00:29:57.614 [2024-07-15 11:41:26.152139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.152150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.152538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.152548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.152952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.152961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.153442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.153479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.153843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.153856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.154374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.154411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.154819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.154832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.155197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.155208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.155567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.155576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.155983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.155993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 
00:29:57.614 [2024-07-15 11:41:26.156379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.156389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.156750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.156759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.157074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.157084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.157498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.157508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.157906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.157915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.158387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.158424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.158845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.158856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.159364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.159409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.159817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.159833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.160195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.160206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 
00:29:57.614 [2024-07-15 11:41:26.160618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.160628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.161037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.161046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.161439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.161450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.161857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.161867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.162230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.162240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.162609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.162618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.163004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.163014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.163423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.163434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.163712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.163722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.164078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.164089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 
00:29:57.614 [2024-07-15 11:41:26.164474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.164484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.164880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.164890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.165280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.165290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.165665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.165674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.166060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.166069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.166458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.166468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.166831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.166840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.167206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.167216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.167599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.167609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 00:29:57.614 [2024-07-15 11:41:26.167860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.167871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.614 qpair failed and we were unable to recover it. 
00:29:57.614 [2024-07-15 11:41:26.168232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.614 [2024-07-15 11:41:26.168243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.615 qpair failed and we were unable to recover it. 00:29:57.615 [2024-07-15 11:41:26.168610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.615 [2024-07-15 11:41:26.168620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.615 qpair failed and we were unable to recover it. 00:29:57.615 [2024-07-15 11:41:26.169023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.615 [2024-07-15 11:41:26.169032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.615 qpair failed and we were unable to recover it. 00:29:57.615 [2024-07-15 11:41:26.169410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.615 [2024-07-15 11:41:26.169420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.615 qpair failed and we were unable to recover it. 00:29:57.615 [2024-07-15 11:41:26.169794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.615 [2024-07-15 11:41:26.169803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.615 qpair failed and we were unable to recover it. 00:29:57.615 [2024-07-15 11:41:26.170003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.615 [2024-07-15 11:41:26.170014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.615 qpair failed and we were unable to recover it. 00:29:57.615 [2024-07-15 11:41:26.170382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.615 [2024-07-15 11:41:26.170392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.615 qpair failed and we were unable to recover it. 00:29:57.615 [2024-07-15 11:41:26.170764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.615 [2024-07-15 11:41:26.170773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.615 qpair failed and we were unable to recover it. 00:29:57.615 [2024-07-15 11:41:26.171153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.615 [2024-07-15 11:41:26.171163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.615 qpair failed and we were unable to recover it. 00:29:57.615 [2024-07-15 11:41:26.171557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.615 [2024-07-15 11:41:26.171566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.615 qpair failed and we were unable to recover it. 
00:29:57.615 [2024-07-15 11:41:26.171961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.615 [2024-07-15 11:41:26.171970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:57.615 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt logged between 11:41:26.171961 and 11:41:26.253441 ...]
00:29:57.620 [2024-07-15 11:41:26.253431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.620 [2024-07-15 11:41:26.253441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:57.620 qpair failed and we were unable to recover it.
00:29:57.620 [2024-07-15 11:41:26.253851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.620 [2024-07-15 11:41:26.253860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.620 qpair failed and we were unable to recover it. 00:29:57.620 [2024-07-15 11:41:26.254219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.620 [2024-07-15 11:41:26.254229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.620 qpair failed and we were unable to recover it. 00:29:57.620 [2024-07-15 11:41:26.254626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.620 [2024-07-15 11:41:26.254637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.620 qpair failed and we were unable to recover it. 00:29:57.620 [2024-07-15 11:41:26.255027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.620 [2024-07-15 11:41:26.255040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.620 qpair failed and we were unable to recover it. 00:29:57.620 [2024-07-15 11:41:26.255412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.620 [2024-07-15 11:41:26.255422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.620 qpair failed and we were unable to recover it. 00:29:57.620 [2024-07-15 11:41:26.255797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.620 [2024-07-15 11:41:26.255806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.620 qpair failed and we were unable to recover it. 00:29:57.620 [2024-07-15 11:41:26.256175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.620 [2024-07-15 11:41:26.256185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.620 qpair failed and we were unable to recover it. 00:29:57.620 [2024-07-15 11:41:26.256578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.620 [2024-07-15 11:41:26.256588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.620 qpair failed and we were unable to recover it. 00:29:57.620 [2024-07-15 11:41:26.256985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.620 [2024-07-15 11:41:26.256995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.620 qpair failed and we were unable to recover it. 00:29:57.620 [2024-07-15 11:41:26.257382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.620 [2024-07-15 11:41:26.257392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.620 qpair failed and we were unable to recover it. 
00:29:57.620 [2024-07-15 11:41:26.257765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.620 [2024-07-15 11:41:26.257775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.620 qpair failed and we were unable to recover it. 00:29:57.620 [2024-07-15 11:41:26.258231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.620 [2024-07-15 11:41:26.258241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.620 qpair failed and we were unable to recover it. 00:29:57.620 [2024-07-15 11:41:26.258595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.620 [2024-07-15 11:41:26.258605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.620 qpair failed and we were unable to recover it. 00:29:57.620 [2024-07-15 11:41:26.259010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.620 [2024-07-15 11:41:26.259019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.620 qpair failed and we were unable to recover it. 00:29:57.620 [2024-07-15 11:41:26.259415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.620 [2024-07-15 11:41:26.259424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.620 qpair failed and we were unable to recover it. 00:29:57.620 [2024-07-15 11:41:26.259825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.620 [2024-07-15 11:41:26.259834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.620 qpair failed and we were unable to recover it. 00:29:57.620 [2024-07-15 11:41:26.260236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.620 [2024-07-15 11:41:26.260246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.620 qpair failed and we were unable to recover it. 00:29:57.620 [2024-07-15 11:41:26.260614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.620 [2024-07-15 11:41:26.260623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.620 qpair failed and we were unable to recover it. 00:29:57.620 [2024-07-15 11:41:26.260990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.620 [2024-07-15 11:41:26.260999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.620 qpair failed and we were unable to recover it. 00:29:57.620 [2024-07-15 11:41:26.261361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.620 [2024-07-15 11:41:26.261371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.620 qpair failed and we were unable to recover it. 
00:29:57.620 [2024-07-15 11:41:26.261724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.620 [2024-07-15 11:41:26.261733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.620 qpair failed and we were unable to recover it. 00:29:57.620 [2024-07-15 11:41:26.262136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.620 [2024-07-15 11:41:26.262145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.620 qpair failed and we were unable to recover it. 00:29:57.620 [2024-07-15 11:41:26.262530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.262539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.262928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.262938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.263351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.263361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.263731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.263741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.263834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.263847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.264229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.264239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.264642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.264652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.265072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.265082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 
00:29:57.621 [2024-07-15 11:41:26.265282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.265295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.265680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.265690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.266064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.266073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.266350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.266361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.266772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.266782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.267169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.267179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.267545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.267555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.267988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.267997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.268369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.268378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.268777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.268787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 
00:29:57.621 [2024-07-15 11:41:26.269146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.269156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.269525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.269534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.269896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.269905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.270291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.270302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.270577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.270587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.270955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.270964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.271329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.271339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.271720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.271730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.272114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.272128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.272484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.272493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 
00:29:57.621 [2024-07-15 11:41:26.272906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.272916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.273427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.273464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.273881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.273893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.274279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.274290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.274662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.274671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.275070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.275079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.275456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.621 [2024-07-15 11:41:26.275467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.621 qpair failed and we were unable to recover it. 00:29:57.621 [2024-07-15 11:41:26.275851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.275860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.276285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.276295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.276655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.276664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 
00:29:57.622 [2024-07-15 11:41:26.277034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.277043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.277425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.277437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.277828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.277838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.278286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.278296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.278653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.278662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.279026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.279035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.279418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.279428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.279799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.279809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.280093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.280103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.280476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.280486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 
00:29:57.622 [2024-07-15 11:41:26.280753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.280763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.281019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.281035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.281312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.281322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.281722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.281732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.282140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.282150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.282546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.282555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.282916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.282925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.283304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.283314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.283694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.283704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.284016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.284025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 
00:29:57.622 [2024-07-15 11:41:26.284412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.284422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.284807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.284817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.285221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.285231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.285632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.285641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.286004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.286013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.286416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.286426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.286826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.286835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.287228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.287237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.287692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.287701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.288092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.288102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 
00:29:57.622 [2024-07-15 11:41:26.288510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.288519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.288811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.288820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.289213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.289222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.289595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.289604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.290017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.290026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.290518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.290528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.290886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.290895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.291279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.291289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.622 [2024-07-15 11:41:26.291709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.622 [2024-07-15 11:41:26.291720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.622 qpair failed and we were unable to recover it. 00:29:57.623 [2024-07-15 11:41:26.292079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.623 [2024-07-15 11:41:26.292088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.623 qpair failed and we were unable to recover it. 
00:29:57.623 [2024-07-15 11:41:26.292573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.623 [2024-07-15 11:41:26.292583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.623 qpair failed and we were unable to recover it. 00:29:57.623 [2024-07-15 11:41:26.292957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.623 [2024-07-15 11:41:26.292966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.623 qpair failed and we were unable to recover it. 00:29:57.894 [2024-07-15 11:41:26.293464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.894 [2024-07-15 11:41:26.293502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.894 qpair failed and we were unable to recover it. 00:29:57.894 [2024-07-15 11:41:26.293923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.894 [2024-07-15 11:41:26.293934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.894 qpair failed and we were unable to recover it. 00:29:57.894 [2024-07-15 11:41:26.294447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.894 [2024-07-15 11:41:26.294484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.894 qpair failed and we were unable to recover it. 00:29:57.894 [2024-07-15 11:41:26.294903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.894 [2024-07-15 11:41:26.294914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.894 qpair failed and we were unable to recover it. 00:29:57.894 [2024-07-15 11:41:26.295349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.894 [2024-07-15 11:41:26.295386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.894 qpair failed and we were unable to recover it. 00:29:57.894 [2024-07-15 11:41:26.295849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.894 [2024-07-15 11:41:26.295861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.894 qpair failed and we were unable to recover it. 00:29:57.894 [2024-07-15 11:41:26.296386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.894 [2024-07-15 11:41:26.296423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.894 qpair failed and we were unable to recover it. 00:29:57.894 [2024-07-15 11:41:26.296835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.894 [2024-07-15 11:41:26.296847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.894 qpair failed and we were unable to recover it. 
00:29:57.894 [2024-07-15 11:41:26.297250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.894 [2024-07-15 11:41:26.297261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.894 qpair failed and we were unable to recover it. 00:29:57.894 [2024-07-15 11:41:26.297616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.894 [2024-07-15 11:41:26.297626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.894 qpair failed and we were unable to recover it. 00:29:57.894 [2024-07-15 11:41:26.298013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.894 [2024-07-15 11:41:26.298022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.894 qpair failed and we were unable to recover it. 00:29:57.894 [2024-07-15 11:41:26.298412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.894 [2024-07-15 11:41:26.298422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.894 qpair failed and we were unable to recover it. 00:29:57.894 [2024-07-15 11:41:26.298790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.894 [2024-07-15 11:41:26.298800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.894 qpair failed and we were unable to recover it. 00:29:57.894 [2024-07-15 11:41:26.299187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.894 [2024-07-15 11:41:26.299197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.894 qpair failed and we were unable to recover it. 00:29:57.894 [2024-07-15 11:41:26.299420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.894 [2024-07-15 11:41:26.299433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.894 qpair failed and we were unable to recover it. 00:29:57.894 [2024-07-15 11:41:26.299799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.894 [2024-07-15 11:41:26.299808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.894 qpair failed and we were unable to recover it. 00:29:57.894 [2024-07-15 11:41:26.300212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.894 [2024-07-15 11:41:26.300221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.894 qpair failed and we were unable to recover it. 00:29:57.894 [2024-07-15 11:41:26.300611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.894 [2024-07-15 11:41:26.300621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.894 qpair failed and we were unable to recover it. 
00:29:57.894 [2024-07-15 11:41:26.301050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.894 [2024-07-15 11:41:26.301059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.894 qpair failed and we were unable to recover it. 00:29:57.894 [2024-07-15 11:41:26.301418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.301428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.301816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.301825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.302185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.302194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.302568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.302578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.302793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.302803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.303191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.303202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.303589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.303599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.303984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.303994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.304401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.304411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 
00:29:57.895 [2024-07-15 11:41:26.304790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.304799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.305159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.305168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.305538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.305547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.305969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.305978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.306342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.306351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.306733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.306743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.307200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.307210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.307630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.307639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.308004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.308013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.308300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.308312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 
00:29:57.895 [2024-07-15 11:41:26.308704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.308713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.309072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.309081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.309459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.309468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.309833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.309851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.310240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.310250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.310633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.310642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.311022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.311030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.311418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.311430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.311813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.311823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.312213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.312222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 
00:29:57.895 [2024-07-15 11:41:26.312550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.312559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.312942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.312951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.313309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.313318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.313738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.313747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.314174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.314184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.314550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.314559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.314961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.314970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.315375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.315385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.315792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.315801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.316162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.316172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 
00:29:57.895 [2024-07-15 11:41:26.316541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.316551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.316933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.895 [2024-07-15 11:41:26.316942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.895 qpair failed and we were unable to recover it. 00:29:57.895 [2024-07-15 11:41:26.317310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.317320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.317603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.317612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.318019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.318029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.318426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.318436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.318837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.318846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.319207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.319216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.319599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.319609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.320039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.320049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 
00:29:57.896 [2024-07-15 11:41:26.320336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.320346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.320793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.320803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.321189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.321199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.321559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.321568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.321983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.321992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.322353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.322362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.322757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.322766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.323067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.323076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.323455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.323465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.323842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.323851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 
00:29:57.896 [2024-07-15 11:41:26.324212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.324221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.324570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.324579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.324938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.324947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.325155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.325168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.325574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.325583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.325984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.325993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.326359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.326368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.326770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.326779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.327170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.327180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.327574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.327583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 
00:29:57.896 [2024-07-15 11:41:26.327983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.327992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.328381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.328390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.328749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.328758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.329131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.329141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.329543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.329552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.329920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.329929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.330402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.330439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.330857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.330868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.331355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.331392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.331812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.331823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 
00:29:57.896 [2024-07-15 11:41:26.332226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.332236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.332594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.332604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.333004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.333014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.896 qpair failed and we were unable to recover it. 00:29:57.896 [2024-07-15 11:41:26.333416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.896 [2024-07-15 11:41:26.333425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.333829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.333838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.334200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.334211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.334618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.334628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.335002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.335015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.335422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.335432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.335796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.335806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 
00:29:57.897 [2024-07-15 11:41:26.336222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.336232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.336634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.336643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.337002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.337011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.337410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.337419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.337833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.337842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.338116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.338130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.338507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.338517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.338904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.338914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.339324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.339334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.339710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.339720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 
00:29:57.897 [2024-07-15 11:41:26.340080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.340089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.340492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.340502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.340914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.340923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.341399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.341436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.341758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.341770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.342155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.342166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.342540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.342549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.342919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.342928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.343357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.343368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.343741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.343750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 
00:29:57.897 [2024-07-15 11:41:26.344116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.344130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.344356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.344370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.344768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.344777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.345183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.345193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.345566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.345575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.345974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.345984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.346267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.346277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.346678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.346688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.347093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.347103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.347493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.347503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 
00:29:57.897 [2024-07-15 11:41:26.347770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.347779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.348181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.348190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.348632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.348640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.349005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.349014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.897 [2024-07-15 11:41:26.349377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.897 [2024-07-15 11:41:26.349387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.897 qpair failed and we were unable to recover it. 00:29:57.898 [2024-07-15 11:41:26.349805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.349814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-07-15 11:41:26.350223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.350233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-07-15 11:41:26.350432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.350442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-07-15 11:41:26.350744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.350753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-07-15 11:41:26.351136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.351146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 
00:29:57.898 [2024-07-15 11:41:26.351547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.351556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-07-15 11:41:26.351923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.351932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-07-15 11:41:26.352366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.352375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-07-15 11:41:26.352776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.352786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-07-15 11:41:26.353192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.353201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-07-15 11:41:26.353411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.353421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-07-15 11:41:26.353777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.353787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-07-15 11:41:26.354188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.354199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-07-15 11:41:26.354608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.354617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-07-15 11:41:26.354993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.355002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 
00:29:57.898 [2024-07-15 11:41:26.355279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.355289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-07-15 11:41:26.355686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.355695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-07-15 11:41:26.356061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.356070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-07-15 11:41:26.356443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.356453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-07-15 11:41:26.356745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.356755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-07-15 11:41:26.357134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.357144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-07-15 11:41:26.357507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.357516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-07-15 11:41:26.357857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.357866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-07-15 11:41:26.358163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.358173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-07-15 11:41:26.358559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.358568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 
00:29:57.898 [2024-07-15 11:41:26.358934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.358943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-07-15 11:41:26.359345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.359355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-07-15 11:41:26.359715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.359724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-07-15 11:41:26.360094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.360103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-07-15 11:41:26.360473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.360483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-07-15 11:41:26.360849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.360861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.898 [2024-07-15 11:41:26.361246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.898 [2024-07-15 11:41:26.361255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.898 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.361638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.361647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.362062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.362071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.362465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.362474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 
00:29:57.899 [2024-07-15 11:41:26.362858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.362867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.363232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.363242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.363661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.363670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.364067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.364077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.364469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.364478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.364770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.364780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.365158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.365168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.365534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.365543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.365852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.365862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.366255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.366264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 
00:29:57.899 [2024-07-15 11:41:26.366672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.366681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.367056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.367065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.367537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.367547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.367903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.367913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.368252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.368262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.368550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.368559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.368944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.368953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.369349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.369359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.369763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.369772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.370051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.370061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 
00:29:57.899 [2024-07-15 11:41:26.370448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.370458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.370823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.370832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.371198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.371208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.371596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.371606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.371990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.371999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.372404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.372413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.372777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.372786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.373155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.373164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.373539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.373548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.373940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.373950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 
00:29:57.899 [2024-07-15 11:41:26.374356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.374365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.374633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.374642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.375035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.375044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.375419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.375429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.375792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.375801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.376164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.376173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.376549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.376560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.899 [2024-07-15 11:41:26.376952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.899 [2024-07-15 11:41:26.376961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.899 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.377339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.377349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.377710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.377719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 
00:29:57.900 [2024-07-15 11:41:26.378079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.378088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.378498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.378507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.378872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.378881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.379097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.379108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.379489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.379498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.379900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.379909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.380405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.380443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.380839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.380851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.381239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.381250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.381618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.381627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 
00:29:57.900 [2024-07-15 11:41:26.382006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.382015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.382466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.382475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.382837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.382846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.383253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.383262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.383602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.383611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.384012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.384022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.384430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.384440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.384825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.384835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.385234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.385243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.385451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.385464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 
00:29:57.900 [2024-07-15 11:41:26.385844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.385853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.386214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.386223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.386635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.386644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.387065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.387077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.387468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.387477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.387962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.387972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.388374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.388384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.388768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.388778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.389159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.389169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.389539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.389548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 
00:29:57.900 [2024-07-15 11:41:26.389922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.389931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.390300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.390309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.390697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.390707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.391003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.391012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.391372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.391381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.391784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.391793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.392157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.392166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.392550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.392560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.392939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.900 [2024-07-15 11:41:26.392949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.900 qpair failed and we were unable to recover it. 00:29:57.900 [2024-07-15 11:41:26.393333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.393343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 
00:29:57.901 [2024-07-15 11:41:26.393746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.393755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.394154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.394163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.394568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.394577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.394943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.394951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.395341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.395351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.395773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.395782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.396183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.396193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.396600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.396608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.396982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.396991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.397354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.397363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 
00:29:57.901 [2024-07-15 11:41:26.397622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.397632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.398030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.398039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.398455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.398464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.398867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.398876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.399246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.399256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.399638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.399648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.400024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.400034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.400418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.400428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.400721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.400731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.401147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.401157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 
00:29:57.901 [2024-07-15 11:41:26.401566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.401575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.401945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.401954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.402238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.402247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.402646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.402655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.403018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.403030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.403415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.403425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.403821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.403830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.404188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.404197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.404580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.404589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.404903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.404912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 
00:29:57.901 [2024-07-15 11:41:26.405298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.405307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.405674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.405689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.406070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.406079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.406438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.406448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.406851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.406860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.407226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.407237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.407634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.407643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.408011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.408020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.408412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.408421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.901 [2024-07-15 11:41:26.408841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.408850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 
00:29:57.901 [2024-07-15 11:41:26.409234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.901 [2024-07-15 11:41:26.409245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.901 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.409643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.409652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.410089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.410098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.410456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.410465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.410853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.410862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.411154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.411163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.411533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.411541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.411780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.411790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.412180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.412189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.412481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.412490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 
00:29:57.902 [2024-07-15 11:41:26.412887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.412896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.413316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.413328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.413717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.413727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.414106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.414115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.414514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.414524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.414922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.414931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.415232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.415242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.415632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.415641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.415999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.416008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.416414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.416423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 
00:29:57.902 [2024-07-15 11:41:26.416787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.416795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.417156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.417166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.417534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.417544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.417931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.417940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.418321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.418330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.418735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.418745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.419111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.419120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.419404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.419413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.419801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.419811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.420196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.420205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 
00:29:57.902 [2024-07-15 11:41:26.420618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.420627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.421021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.421030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.421430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.421440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.421806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.421815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.422180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.422189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.422478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.422487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.422871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.422880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.423243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.423252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.423627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.423636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.424026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.424035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 
00:29:57.902 [2024-07-15 11:41:26.424427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.424436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.424798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.902 [2024-07-15 11:41:26.424807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.902 qpair failed and we were unable to recover it. 00:29:57.902 [2024-07-15 11:41:26.425214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.903 [2024-07-15 11:41:26.425224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.903 qpair failed and we were unable to recover it. 00:29:57.903 [2024-07-15 11:41:26.425613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.903 [2024-07-15 11:41:26.425622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.903 qpair failed and we were unable to recover it. 00:29:57.903 [2024-07-15 11:41:26.426021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.903 [2024-07-15 11:41:26.426030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.903 qpair failed and we were unable to recover it. 00:29:57.903 [2024-07-15 11:41:26.426426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.903 [2024-07-15 11:41:26.426436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.903 qpair failed and we were unable to recover it. 00:29:57.903 [2024-07-15 11:41:26.426798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.903 [2024-07-15 11:41:26.426808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.903 qpair failed and we were unable to recover it. 00:29:57.903 [2024-07-15 11:41:26.427194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.903 [2024-07-15 11:41:26.427203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.903 qpair failed and we were unable to recover it. 00:29:57.903 [2024-07-15 11:41:26.427600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.903 [2024-07-15 11:41:26.427609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.903 qpair failed and we were unable to recover it. 00:29:57.903 [2024-07-15 11:41:26.428019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.903 [2024-07-15 11:41:26.428027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.903 qpair failed and we were unable to recover it. 
00:29:57.903 [2024-07-15 11:41:26.428408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.903 [2024-07-15 11:41:26.428418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.903 qpair failed and we were unable to recover it. 00:29:57.903 [2024-07-15 11:41:26.428806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.903 [2024-07-15 11:41:26.428815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.903 qpair failed and we were unable to recover it. 00:29:57.903 [2024-07-15 11:41:26.429171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.903 [2024-07-15 11:41:26.429185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.903 qpair failed and we were unable to recover it. 00:29:57.903 [2024-07-15 11:41:26.429603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.903 [2024-07-15 11:41:26.429612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.903 qpair failed and we were unable to recover it. 00:29:57.903 [2024-07-15 11:41:26.430029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.903 [2024-07-15 11:41:26.430038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.903 qpair failed and we were unable to recover it. 00:29:57.903 [2024-07-15 11:41:26.430446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.903 [2024-07-15 11:41:26.430456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.903 qpair failed and we were unable to recover it. 00:29:57.903 [2024-07-15 11:41:26.430832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.903 [2024-07-15 11:41:26.430842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.903 qpair failed and we were unable to recover it. 00:29:57.903 [2024-07-15 11:41:26.431218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.903 [2024-07-15 11:41:26.431228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.903 qpair failed and we were unable to recover it. 00:29:57.903 [2024-07-15 11:41:26.431593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.903 [2024-07-15 11:41:26.431602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.903 qpair failed and we were unable to recover it. 00:29:57.903 [2024-07-15 11:41:26.431985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.903 [2024-07-15 11:41:26.431994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.903 qpair failed and we were unable to recover it. 
00:29:57.903 [2024-07-15 11:41:26.432395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.903 [2024-07-15 11:41:26.432405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.903 qpair failed and we were unable to recover it. 00:29:57.903 [2024-07-15 11:41:26.432767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.903 [2024-07-15 11:41:26.432776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.903 qpair failed and we were unable to recover it. 00:29:57.903 [2024-07-15 11:41:26.433179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.903 [2024-07-15 11:41:26.433189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.903 qpair failed and we were unable to recover it. 00:29:57.903 [2024-07-15 11:41:26.433580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.903 [2024-07-15 11:41:26.433589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.903 qpair failed and we were unable to recover it. 00:29:57.903 [2024-07-15 11:41:26.433971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.903 [2024-07-15 11:41:26.433980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.903 qpair failed and we were unable to recover it. 00:29:57.903 [2024-07-15 11:41:26.434376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.903 [2024-07-15 11:41:26.434385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.903 qpair failed and we were unable to recover it. 00:29:57.903 [2024-07-15 11:41:26.434750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.903 [2024-07-15 11:41:26.434759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.903 qpair failed and we were unable to recover it. 00:29:57.903 [2024-07-15 11:41:26.435119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.903 [2024-07-15 11:41:26.435131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.903 qpair failed and we were unable to recover it. 00:29:57.903 [2024-07-15 11:41:26.435332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.903 [2024-07-15 11:41:26.435344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.903 qpair failed and we were unable to recover it. 00:29:57.903 [2024-07-15 11:41:26.435628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.903 [2024-07-15 11:41:26.435637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.903 qpair failed and we were unable to recover it. 
00:29:57.903 [2024-07-15 11:41:26.436031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.903 [2024-07-15 11:41:26.436040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:57.903 qpair failed and we were unable to recover it.
00:29:57.903 [2024-07-15 11:41:26.436404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.903 [2024-07-15 11:41:26.436413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:57.903 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats for every reconnect attempt logged between 11:41:26.436 and 11:41:26.517 (console time 00:29:57.903-00:29:57.909): posix.c:1038:posix_sock_create reports connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420; and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:29:57.909 [2024-07-15 11:41:26.517114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.909 [2024-07-15 11:41:26.517137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:57.909 qpair failed and we were unable to recover it.
00:29:57.909 [2024-07-15 11:41:26.517522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.909 [2024-07-15 11:41:26.517531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:57.909 qpair failed and we were unable to recover it.
00:29:57.909 [2024-07-15 11:41:26.517924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.517934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 [2024-07-15 11:41:26.518316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.518326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 [2024-07-15 11:41:26.518698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.518707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 [2024-07-15 11:41:26.519109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.519118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 [2024-07-15 11:41:26.519485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.519497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 [2024-07-15 11:41:26.519855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.519865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 [2024-07-15 11:41:26.520269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.520278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 [2024-07-15 11:41:26.520566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.520575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 [2024-07-15 11:41:26.520961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.520970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 [2024-07-15 11:41:26.521373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.521383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 
00:29:57.909 [2024-07-15 11:41:26.521785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.521795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 [2024-07-15 11:41:26.522176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.522185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 [2024-07-15 11:41:26.522574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.522584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 [2024-07-15 11:41:26.522970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.522980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3738351 Killed "${NVMF_APP[@]}" "$@" 00:29:57.909 [2024-07-15 11:41:26.523388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.523399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 [2024-07-15 11:41:26.523760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.523769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 11:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:29:57.909 [2024-07-15 11:41:26.524154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.524168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 11:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:57.909 [2024-07-15 11:41:26.524569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.524580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 
00:29:57.909 11:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:57.909 11:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:57.909 [2024-07-15 11:41:26.524946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.524957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 11:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:57.909 [2024-07-15 11:41:26.525233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.525243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 [2024-07-15 11:41:26.525626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.525636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 [2024-07-15 11:41:26.526012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.526021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 [2024-07-15 11:41:26.526459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.526469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 [2024-07-15 11:41:26.526826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.526835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 [2024-07-15 11:41:26.527085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.527096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 [2024-07-15 11:41:26.527484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.527495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 [2024-07-15 11:41:26.527898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.527908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 
00:29:57.909 [2024-07-15 11:41:26.528212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.528222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 [2024-07-15 11:41:26.528618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.528631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 [2024-07-15 11:41:26.528925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.528935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 [2024-07-15 11:41:26.529303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.529313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 [2024-07-15 11:41:26.529718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.529727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 [2024-07-15 11:41:26.530111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.530126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 [2024-07-15 11:41:26.530330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.530341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 [2024-07-15 11:41:26.530732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.530741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 [2024-07-15 11:41:26.531131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.909 [2024-07-15 11:41:26.531142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.909 qpair failed and we were unable to recover it. 00:29:57.909 [2024-07-15 11:41:26.531536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.531545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 
00:29:57.910 [2024-07-15 11:41:26.531940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.531950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 11:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3739380 00:29:57.910 11:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3739380 00:29:57.910 [2024-07-15 11:41:26.532451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.532489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 11:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:57.910 11:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3739380 ']' 00:29:57.910 [2024-07-15 11:41:26.532884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.532898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 11:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.910 11:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:57.910 [2024-07-15 11:41:26.533390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.533430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 11:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:57.910 11:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:57.910 [2024-07-15 11:41:26.533831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.533845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 11:41:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:57.910 [2024-07-15 11:41:26.534273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.534285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 
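After the old target process is killed, the test launches a fresh nvmf_tgt (nvmfpid 3739380 above) and blocks until it is listening on the UNIX domain RPC socket /var/tmp/spdk.sock, per the "Waiting for process to start up and listen..." message. A rough standalone sketch of that kind of wait, using plain POSIX calls rather than the actual SPDK waitforlisten helper; the 30-second timeout and 100 ms poll interval are illustrative assumptions:

```c
/* Minimal sketch (not the SPDK helper): poll a UNIX-domain socket path
 * until something is listening, roughly what waiting on
 * /var/tmp/spdk.sock amounts to. Timeout and interval are assumptions. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_unix_listener(const char *path, int timeout_sec)
{
    for (int i = 0; i < timeout_sec * 10; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_un addr = {0};
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;             /* listener is up */
        }
        close(fd);
        usleep(100 * 1000);       /* retry every 100 ms */
    }
    errno = ETIMEDOUT;
    return -1;
}

int main(void)
{
    if (wait_for_unix_listener("/var/tmp/spdk.sock", 30) == 0)
        printf("RPC socket is listening\n");
    else
        perror("wait_for_unix_listener");
    return 0;
}
```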
00:29:57.910 [2024-07-15 11:41:26.534748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.534758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 [2024-07-15 11:41:26.535136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.535148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 [2024-07-15 11:41:26.535559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.535569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 [2024-07-15 11:41:26.535975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.535988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 [2024-07-15 11:41:26.536357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.536367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 [2024-07-15 11:41:26.536751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.536762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 [2024-07-15 11:41:26.537144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.537154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 [2024-07-15 11:41:26.537589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.537601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 [2024-07-15 11:41:26.538002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.538012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 [2024-07-15 11:41:26.538383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.538392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 
00:29:57.910 [2024-07-15 11:41:26.538775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.538785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 [2024-07-15 11:41:26.539185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.539195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 [2024-07-15 11:41:26.539567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.539577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 [2024-07-15 11:41:26.539954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.539963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 [2024-07-15 11:41:26.540338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.540348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 [2024-07-15 11:41:26.540752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.540762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 [2024-07-15 11:41:26.540980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.540993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 [2024-07-15 11:41:26.541360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.541372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 [2024-07-15 11:41:26.541755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.541765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 [2024-07-15 11:41:26.542131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.542142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 
00:29:57.910 [2024-07-15 11:41:26.542537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.542546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 [2024-07-15 11:41:26.542933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.542942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 [2024-07-15 11:41:26.543400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.543437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 [2024-07-15 11:41:26.543855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.543867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 [2024-07-15 11:41:26.544385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.544423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 [2024-07-15 11:41:26.544730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.544742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 [2024-07-15 11:41:26.545141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.545152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 [2024-07-15 11:41:26.545584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.545594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 [2024-07-15 11:41:26.545962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.545972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 00:29:57.910 [2024-07-15 11:41:26.546360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.546373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.910 qpair failed and we were unable to recover it. 
00:29:57.910 [2024-07-15 11:41:26.546735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.910 [2024-07-15 11:41:26.546745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.547117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.547132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.547560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.547570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.547961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.547970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.548479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.548515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.548931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.548947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.549425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.549462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.549798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.549812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.550198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.550211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.550586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.550596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 
00:29:57.911 [2024-07-15 11:41:26.551003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.551012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.551434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.551444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.551800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.551809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.552231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.552241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.552625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.552636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.553024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.553034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.553408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.553418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.553822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.553833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.554222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.554232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.554652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.554662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 
00:29:57.911 [2024-07-15 11:41:26.554912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.554923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.555351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.555361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.555721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.555730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.556113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.556129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.556537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.556546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.556950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.556960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.557453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.557490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.557923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.557935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.558386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.558423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.558800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.558813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 
00:29:57.911 [2024-07-15 11:41:26.559175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.559186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.559611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.559622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.560015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.560030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.560399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.560410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.560796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.560805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.911 qpair failed and we were unable to recover it. 00:29:57.911 [2024-07-15 11:41:26.561169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.911 [2024-07-15 11:41:26.561179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.561577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.561586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.562037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.562046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.562402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.562411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.562832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.562841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 
00:29:57.912 [2024-07-15 11:41:26.563231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.563248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.563668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.563677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.563896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.563910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.564300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.564310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.564693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.564702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.564970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.564979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.565350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.565360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.565729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.565738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.566102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.566111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.566502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.566513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 
00:29:57.912 [2024-07-15 11:41:26.566899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.566908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.567376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.567413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.567804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.567815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.568189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.568200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.568587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.568597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.568989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.568999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.569364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.569375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.569747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.569757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.570142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.570152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.570582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.570591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 
00:29:57.912 [2024-07-15 11:41:26.570960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.570969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.571456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.571465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.571740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.571750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.572134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.572144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.572544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.572553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.572919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.572928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.573293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.573304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.573690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.573699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.574082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.574091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.574457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.574466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 
00:29:57.912 [2024-07-15 11:41:26.574818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.574835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.575226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.575236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.575665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.575675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.576098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.576109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.576471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.576480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.576855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.576865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.912 [2024-07-15 11:41:26.577150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.912 [2024-07-15 11:41:26.577160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.912 qpair failed and we were unable to recover it. 00:29:57.913 [2024-07-15 11:41:26.577547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.913 [2024-07-15 11:41:26.577557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.913 qpair failed and we were unable to recover it. 00:29:57.913 [2024-07-15 11:41:26.577962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.913 [2024-07-15 11:41:26.577971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.913 qpair failed and we were unable to recover it. 00:29:57.913 [2024-07-15 11:41:26.578375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.913 [2024-07-15 11:41:26.578384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.913 qpair failed and we were unable to recover it. 
00:29:57.913 [2024-07-15 11:41:26.578785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.913 [2024-07-15 11:41:26.578794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.913 qpair failed and we were unable to recover it. 00:29:57.913 [2024-07-15 11:41:26.579076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.913 [2024-07-15 11:41:26.579090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.913 qpair failed and we were unable to recover it. 00:29:57.913 [2024-07-15 11:41:26.579376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.913 [2024-07-15 11:41:26.579386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.913 qpair failed and we were unable to recover it. 00:29:57.913 [2024-07-15 11:41:26.579778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.913 [2024-07-15 11:41:26.579787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.913 qpair failed and we were unable to recover it. 00:29:57.913 [2024-07-15 11:41:26.580188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.913 [2024-07-15 11:41:26.580198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.913 qpair failed and we were unable to recover it. 00:29:57.913 [2024-07-15 11:41:26.580640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.913 [2024-07-15 11:41:26.580649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.913 qpair failed and we were unable to recover it. 00:29:57.913 [2024-07-15 11:41:26.581011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.913 [2024-07-15 11:41:26.581021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.913 qpair failed and we were unable to recover it. 00:29:57.913 [2024-07-15 11:41:26.581411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.913 [2024-07-15 11:41:26.581422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.913 qpair failed and we were unable to recover it. 00:29:57.913 [2024-07-15 11:41:26.581818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.913 [2024-07-15 11:41:26.581829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.913 qpair failed and we were unable to recover it. 00:29:57.913 [2024-07-15 11:41:26.582082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.913 [2024-07-15 11:41:26.582093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.913 qpair failed and we were unable to recover it. 
00:29:57.913 [2024-07-15 11:41:26.582453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.913 [2024-07-15 11:41:26.582463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.913 qpair failed and we were unable to recover it. 00:29:57.913 [2024-07-15 11:41:26.582852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.913 [2024-07-15 11:41:26.582861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.913 qpair failed and we were unable to recover it. 00:29:57.913 [2024-07-15 11:41:26.583221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.913 [2024-07-15 11:41:26.583231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.913 qpair failed and we were unable to recover it. 00:29:57.913 [2024-07-15 11:41:26.583609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.913 [2024-07-15 11:41:26.583619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.913 qpair failed and we were unable to recover it. 00:29:57.913 [2024-07-15 11:41:26.583983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.913 [2024-07-15 11:41:26.583994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.913 qpair failed and we were unable to recover it. 00:29:57.913 [2024-07-15 11:41:26.584391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.913 [2024-07-15 11:41:26.584400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.913 qpair failed and we were unable to recover it. 00:29:57.913 [2024-07-15 11:41:26.584804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.913 [2024-07-15 11:41:26.584813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.913 qpair failed and we were unable to recover it. 00:29:57.913 [2024-07-15 11:41:26.585178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.913 [2024-07-15 11:41:26.585188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:57.913 qpair failed and we were unable to recover it. 00:29:58.208 [2024-07-15 11:41:26.585573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.208 [2024-07-15 11:41:26.585584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.208 qpair failed and we were unable to recover it. 00:29:58.208 [2024-07-15 11:41:26.585970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.208 [2024-07-15 11:41:26.585980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.208 qpair failed and we were unable to recover it. 
00:29:58.208 [2024-07-15 11:41:26.586384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.208 [2024-07-15 11:41:26.586400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.208 qpair failed and we were unable to recover it. 00:29:58.208 [2024-07-15 11:41:26.586772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.208 [2024-07-15 11:41:26.586781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.208 qpair failed and we were unable to recover it. 00:29:58.208 [2024-07-15 11:41:26.586921] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:29:58.208 [2024-07-15 11:41:26.586971] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:58.208 [2024-07-15 11:41:26.587147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.208 [2024-07-15 11:41:26.587158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.208 qpair failed and we were unable to recover it. 00:29:58.208 [2024-07-15 11:41:26.587531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.208 [2024-07-15 11:41:26.587539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.208 qpair failed and we were unable to recover it. 00:29:58.208 [2024-07-15 11:41:26.587921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.208 [2024-07-15 11:41:26.587932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.208 qpair failed and we were unable to recover it. 00:29:58.208 [2024-07-15 11:41:26.588336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.208 [2024-07-15 11:41:26.588345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.208 qpair failed and we were unable to recover it. 00:29:58.208 [2024-07-15 11:41:26.588707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.208 [2024-07-15 11:41:26.588717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.208 qpair failed and we were unable to recover it. 00:29:58.208 [2024-07-15 11:41:26.589100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.208 [2024-07-15 11:41:26.589110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.208 qpair failed and we were unable to recover it. 00:29:58.208 [2024-07-15 11:41:26.589553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.208 [2024-07-15 11:41:26.589564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.208 qpair failed and we were unable to recover it. 
00:29:58.208 [2024-07-15 11:41:26.589938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.208 [2024-07-15 11:41:26.589948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.208 qpair failed and we were unable to recover it. 00:29:58.208 [2024-07-15 11:41:26.590362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.208 [2024-07-15 11:41:26.590400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.208 qpair failed and we were unable to recover it. 00:29:58.208 [2024-07-15 11:41:26.590816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.208 [2024-07-15 11:41:26.590828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.208 qpair failed and we were unable to recover it. 00:29:58.208 [2024-07-15 11:41:26.591254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.208 [2024-07-15 11:41:26.591265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.208 qpair failed and we were unable to recover it. 00:29:58.208 [2024-07-15 11:41:26.591632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.208 [2024-07-15 11:41:26.591643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.208 qpair failed and we were unable to recover it. 00:29:58.208 [2024-07-15 11:41:26.591893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.208 [2024-07-15 11:41:26.591904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.208 qpair failed and we were unable to recover it. 00:29:58.208 [2024-07-15 11:41:26.592205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.208 [2024-07-15 11:41:26.592215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.208 qpair failed and we were unable to recover it. 00:29:58.208 [2024-07-15 11:41:26.592508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.208 [2024-07-15 11:41:26.592518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.208 qpair failed and we were unable to recover it. 00:29:58.208 [2024-07-15 11:41:26.592915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.208 [2024-07-15 11:41:26.592925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.208 qpair failed and we were unable to recover it. 00:29:58.208 [2024-07-15 11:41:26.593306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.208 [2024-07-15 11:41:26.593319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.208 qpair failed and we were unable to recover it. 
00:29:58.208 [2024-07-15 11:41:26.593595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.208 [2024-07-15 11:41:26.593605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.208 qpair failed and we were unable to recover it. 00:29:58.208 [2024-07-15 11:41:26.593891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.208 [2024-07-15 11:41:26.593901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.208 qpair failed and we were unable to recover it. 00:29:58.208 [2024-07-15 11:41:26.594299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.208 [2024-07-15 11:41:26.594309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.208 qpair failed and we were unable to recover it. 00:29:58.208 [2024-07-15 11:41:26.594697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.208 [2024-07-15 11:41:26.594707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.208 qpair failed and we were unable to recover it. 00:29:58.208 [2024-07-15 11:41:26.595129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.208 [2024-07-15 11:41:26.595139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.208 qpair failed and we were unable to recover it. 00:29:58.208 [2024-07-15 11:41:26.595515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.595525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.595933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.595943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.596423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.596465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.596860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.596872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.597277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.597288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 
00:29:58.209 [2024-07-15 11:41:26.597700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.597709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.598074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.598084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.598545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.598556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.598915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.598924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.599451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.599488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.599905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.599917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.600424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.600461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.600893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.600907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.601412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.601449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.601767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.601780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 
00:29:58.209 [2024-07-15 11:41:26.602167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.602178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.602543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.602553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.602963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.602973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.603483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.603493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.603858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.603867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.604365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.604401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.604777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.604789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.605157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.605168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.605574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.605584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.605973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.605982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 
00:29:58.209 [2024-07-15 11:41:26.606348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.606360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.606737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.606747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.607051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.607060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.607437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.607447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.607849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.607859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.608247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.608257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.608643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.608652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.609041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.609050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.609432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.609441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.609800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.609809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 
00:29:58.209 [2024-07-15 11:41:26.610197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.610208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.610612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.610621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.610903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.610919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.611303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.611313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.611677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.611687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.612059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.209 [2024-07-15 11:41:26.612068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.209 qpair failed and we were unable to recover it. 00:29:58.209 [2024-07-15 11:41:26.612447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.612457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.612838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.612848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.613208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.613221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.613625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.613634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 
00:29:58.210 [2024-07-15 11:41:26.614037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.614047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.614432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.614441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.614848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.614856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.615252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.615261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.615631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.615641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.616030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.616039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.616417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.616426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.616790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.616799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.617200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.617209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.617487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.617505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 
00:29:58.210 [2024-07-15 11:41:26.617891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.617900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.618190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.618199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.618578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.618588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.618970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.618979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 EAL: No free 2048 kB hugepages reported on node 1 00:29:58.210 [2024-07-15 11:41:26.619410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.619420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.619798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.619807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.620204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.620213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.620597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.620608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.620992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.621001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.621371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.621381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 
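In the middle of the retry loop above, an SPDK nvmf application starting in parallel (its DPDK EAL parameters appear a few lines earlier) also logs "EAL: No free 2048 kB hugepages reported on node 1", i.e. DPDK found no 2 MB hugepages reserved on NUMA node 1 at initialization time. As a hedged illustration only (how this CI job actually provisions hugepages may differ, e.g. via SPDK's setup scripts), per-node 2 MB hugepage counts can be inspected and adjusted through the standard Linux sysfs interface:

  # Show how many 2048 kB hugepages are currently reserved on NUMA node 1 (node number
  # taken from the EAL message above); a value of 0 matches the warning.
  cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
  # Reserve 2 MB hugepages on that node; 1024 is an arbitrary example value, not a figure
  # taken from this job's configuration.
  echo 1024 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages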
00:29:58.210 [2024-07-15 11:41:26.621766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.621778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.622237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.622248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.622622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.622631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.622987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.622996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.623435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.623444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.623804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.623815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.624184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.624194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.624568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.624577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.624941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.624950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.625310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.625319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 
00:29:58.210 [2024-07-15 11:41:26.625709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.625719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.626103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.626113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.626481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.626491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.626877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.626886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.627309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.627319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.627735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.627744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.628110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-15 11:41:26.628120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.210 qpair failed and we were unable to recover it. 00:29:58.210 [2024-07-15 11:41:26.628543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.628552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.628828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.628846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.629356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.629393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 
00:29:58.211 [2024-07-15 11:41:26.629803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.629815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.630217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.630227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.630607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.630617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.630991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.631000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.631274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.631285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.631691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.631700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.632084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.632094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.632471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.632481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.632878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.632887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.633182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.633192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 
00:29:58.211 [2024-07-15 11:41:26.633584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.633595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.633890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.633899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.634263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.634273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.634528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.634537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.634790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.634799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.635183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.635193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.635482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.635491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.635861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.635870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.636271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.636280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.636688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.636697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 
00:29:58.211 [2024-07-15 11:41:26.637132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.637142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.637533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.637542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.637910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.637920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.638309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.638320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.638765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.638774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.639158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.639168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.639547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.639557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.639960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.639969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.640250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.640260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.640658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.640668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 
00:29:58.211 [2024-07-15 11:41:26.641042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.641051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.641463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.641473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.641857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.641866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.642242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.642252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.642732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.642742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.643110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.643120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.643517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.643527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.643917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.211 [2024-07-15 11:41:26.643927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.211 qpair failed and we were unable to recover it. 00:29:58.211 [2024-07-15 11:41:26.644411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.644449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 00:29:58.212 [2024-07-15 11:41:26.644865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.644878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 
00:29:58.212 [2024-07-15 11:41:26.645308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.645320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 00:29:58.212 [2024-07-15 11:41:26.645635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.645645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 00:29:58.212 [2024-07-15 11:41:26.645962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.645972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 00:29:58.212 [2024-07-15 11:41:26.646373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.646383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 00:29:58.212 [2024-07-15 11:41:26.646769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.646780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 00:29:58.212 [2024-07-15 11:41:26.647164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.647174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 00:29:58.212 [2024-07-15 11:41:26.647543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.647553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 00:29:58.212 [2024-07-15 11:41:26.648012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.648021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 00:29:58.212 [2024-07-15 11:41:26.648398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.648408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 00:29:58.212 [2024-07-15 11:41:26.648686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.648695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 
00:29:58.212 [2024-07-15 11:41:26.649084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.649094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 00:29:58.212 [2024-07-15 11:41:26.649457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.649467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 00:29:58.212 [2024-07-15 11:41:26.649872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.649881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 00:29:58.212 [2024-07-15 11:41:26.650241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.650253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 00:29:58.212 [2024-07-15 11:41:26.650560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.650570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 00:29:58.212 [2024-07-15 11:41:26.650929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.650938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 00:29:58.212 [2024-07-15 11:41:26.651311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.651321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 00:29:58.212 [2024-07-15 11:41:26.651690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.651700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 00:29:58.212 [2024-07-15 11:41:26.652087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.652097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 00:29:58.212 [2024-07-15 11:41:26.652474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.652484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 
00:29:58.212 [2024-07-15 11:41:26.652891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.652901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 00:29:58.212 [2024-07-15 11:41:26.653289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.653298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 00:29:58.212 [2024-07-15 11:41:26.653706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.653715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 00:29:58.212 [2024-07-15 11:41:26.654116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.654130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 00:29:58.212 [2024-07-15 11:41:26.654383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.654393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 00:29:58.212 [2024-07-15 11:41:26.654821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.654830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 00:29:58.212 [2024-07-15 11:41:26.655190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.655200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 00:29:58.212 [2024-07-15 11:41:26.655479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.655488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 00:29:58.212 [2024-07-15 11:41:26.655763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.655772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 00:29:58.212 [2024-07-15 11:41:26.656221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.212 [2024-07-15 11:41:26.656231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.212 qpair failed and we were unable to recover it. 
00:29:58.212 [2024-07-15 11:41:26.656599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.656608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.656982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.656992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.657358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.657368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.657758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.657767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.658168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.658178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.658554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.658563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.658935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.658945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.659327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.659336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.659782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.659791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.660155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.660164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 
00:29:58.213 [2024-07-15 11:41:26.660569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.660579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.660964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.660973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.661381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.661390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.661778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.661788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.662157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.662167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.662555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.662564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.662955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.662965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.663367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.663377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.663783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.663793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.664150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.664160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 
00:29:58.213 [2024-07-15 11:41:26.664440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.664449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.664835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.664845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.665213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.665223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.665603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.665612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.666048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.666058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.666472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.666482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.666880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.666890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.667282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.667293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.667530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.667540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.667915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.667924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 
00:29:58.213 [2024-07-15 11:41:26.668141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.668154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.668475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.668486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.668879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.668889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.669277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.669286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.669450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:58.213 [2024-07-15 11:41:26.669718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.669727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.669986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.669995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.670368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.670378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.670742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.670754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.671159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.671169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.671553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.671563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 
00:29:58.213 [2024-07-15 11:41:26.671992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.213 [2024-07-15 11:41:26.672002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.213 qpair failed and we were unable to recover it. 00:29:58.213 [2024-07-15 11:41:26.672387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.672397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.672675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.672685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.673002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.673011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.673413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.673423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.673820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.673830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.674125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.674136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.674549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.674558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.674964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.674973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.675500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.675537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 
00:29:58.214 [2024-07-15 11:41:26.675953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.675965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.676497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.676534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.676945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.676956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.677456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.677493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.677916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.677928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.678436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.678472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.678714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.678726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.679130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.679140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.679519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.679528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.679919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.679929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 
00:29:58.214 [2024-07-15 11:41:26.680435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.680472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.680847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.680858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.681334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.681371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.681809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.681822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.682214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.682225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.682601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.682611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.682906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.682915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.683304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.683315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.683726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.683735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.684014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.684024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 
00:29:58.214 [2024-07-15 11:41:26.684413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.684424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.684787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.684796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.685154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.685164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.685571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.685580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.685982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.685992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.686396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.686405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.686768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.686777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.687178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.687188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.687590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.687602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.687824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.687836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 
00:29:58.214 [2024-07-15 11:41:26.688227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.688237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.688618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.214 [2024-07-15 11:41:26.688627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.214 qpair failed and we were unable to recover it. 00:29:58.214 [2024-07-15 11:41:26.689073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.689082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.689471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.689480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.689882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.689892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.690264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.690273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.690682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.690690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.691090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.691099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.691563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.691572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.691951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.691960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 
00:29:58.215 [2024-07-15 11:41:26.692325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.692362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.692757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.692769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.693163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.693174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.693562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.693571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.693945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.693954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.694326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.694336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.694759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.694768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.695125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.695135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.695343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.695355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.695717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.695727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 
00:29:58.215 [2024-07-15 11:41:26.696188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.696198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.696581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.696590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.696994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.697003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.697368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.697378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.697767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.697776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.698176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.698188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.698614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.698623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.698982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.698991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.699367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.699377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.699771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.699781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 
00:29:58.215 [2024-07-15 11:41:26.700187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.700197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.700587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.700597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.701057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.701066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.701331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.701341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.701755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.701764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.702134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.702143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.702515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.702524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.702809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.702818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.703077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.703087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.703472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.703482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 
00:29:58.215 [2024-07-15 11:41:26.703760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.703769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.704044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.704053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.704439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.704449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.215 [2024-07-15 11:41:26.704843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-07-15 11:41:26.704852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.215 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.705213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.705223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.705583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.705594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.705996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.706006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.706282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.706291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.706682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.706692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.707079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.707089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 
00:29:58.216 [2024-07-15 11:41:26.707494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.707503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.707868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.707878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.708259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.708269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.708654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.708663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.709045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.709055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.709452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.709462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.709865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.709874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.710241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.710250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.710633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.710642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.711040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.711049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 
00:29:58.216 [2024-07-15 11:41:26.711335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.711345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.711779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.711788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.712075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.712084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.712474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.712484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.712848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.712857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.713269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.713278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.713681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.713692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.713987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.713997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.714377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.714386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.714773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.714782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 
00:29:58.216 [2024-07-15 11:41:26.715195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.715205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.715603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.715613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.715999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.716008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.716378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.716389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.716840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.716850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.717213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.717223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.717640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.717649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.718035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.718045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.718421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.718430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 00:29:58.216 [2024-07-15 11:41:26.718829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.216 [2024-07-15 11:41:26.718838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.216 qpair failed and we were unable to recover it. 
00:29:58.216 [2024-07-15 11:41:26.719216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.719226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 00:29:58.217 [2024-07-15 11:41:26.719653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.719663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 00:29:58.217 [2024-07-15 11:41:26.720042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.720053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 00:29:58.217 [2024-07-15 11:41:26.720482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.720492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 00:29:58.217 [2024-07-15 11:41:26.720898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.720908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 00:29:58.217 [2024-07-15 11:41:26.721216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.721226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 00:29:58.217 [2024-07-15 11:41:26.721619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.721628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 00:29:58.217 [2024-07-15 11:41:26.722032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.722041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 00:29:58.217 [2024-07-15 11:41:26.722416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.722426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 00:29:58.217 [2024-07-15 11:41:26.722776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.722785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 
00:29:58.217 [2024-07-15 11:41:26.723186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.723196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 00:29:58.217 [2024-07-15 11:41:26.723652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.723661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 00:29:58.217 [2024-07-15 11:41:26.724026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.724035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 00:29:58.217 [2024-07-15 11:41:26.724434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.724444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 00:29:58.217 [2024-07-15 11:41:26.724864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.724874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 00:29:58.217 [2024-07-15 11:41:26.725265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.725275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 00:29:58.217 [2024-07-15 11:41:26.725660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.725669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 00:29:58.217 [2024-07-15 11:41:26.726059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.726068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 00:29:58.217 [2024-07-15 11:41:26.726439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.726449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 00:29:58.217 [2024-07-15 11:41:26.726832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.726841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 
00:29:58.217 [2024-07-15 11:41:26.727240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.727251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 00:29:58.217 [2024-07-15 11:41:26.727689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.727699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 00:29:58.217 [2024-07-15 11:41:26.728109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.728119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 00:29:58.217 [2024-07-15 11:41:26.728526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.728536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 00:29:58.217 [2024-07-15 11:41:26.728807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.728816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 00:29:58.217 [2024-07-15 11:41:26.729198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.729208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 00:29:58.217 [2024-07-15 11:41:26.729630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.729639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 00:29:58.217 [2024-07-15 11:41:26.730006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.730016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 00:29:58.217 [2024-07-15 11:41:26.730419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.730430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 00:29:58.217 [2024-07-15 11:41:26.730816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.217 [2024-07-15 11:41:26.730825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.217 qpair failed and we were unable to recover it. 
00:29:58.217 [2024-07-15 11:41:26.731246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.217 [2024-07-15 11:41:26.731256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:58.217 qpair failed and we were unable to recover it.
00:29:58.217 [2024-07-15 11:41:26.731640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.217 [2024-07-15 11:41:26.731649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:58.217 qpair failed and we were unable to recover it.
00:29:58.217 [2024-07-15 11:41:26.732043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.217 [2024-07-15 11:41:26.732052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:58.217 qpair failed and we were unable to recover it.
00:29:58.217 [2024-07-15 11:41:26.732400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.217 [2024-07-15 11:41:26.732410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:58.217 qpair failed and we were unable to recover it.
00:29:58.217 [2024-07-15 11:41:26.732820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.217 [2024-07-15 11:41:26.732831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:58.217 qpair failed and we were unable to recover it.
00:29:58.217 [2024-07-15 11:41:26.733220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.217 [2024-07-15 11:41:26.733230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:58.217 qpair failed and we were unable to recover it.
00:29:58.217 [2024-07-15 11:41:26.733606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.217 [2024-07-15 11:41:26.733616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:58.217 qpair failed and we were unable to recover it.
00:29:58.217 [2024-07-15 11:41:26.733950] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:58.217 [2024-07-15 11:41:26.733979] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:58.217 [2024-07-15 11:41:26.733986] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:58.217 [2024-07-15 11:41:26.733993] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:58.217 [2024-07-15 11:41:26.733998] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:58.218 [2024-07-15 11:41:26.734000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.218 [2024-07-15 11:41:26.734009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:58.218 qpair failed and we were unable to recover it.
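The app_setup_trace NOTICE lines a few entries above describe how this run's tracepoints (group mask 0xFFFF) can be inspected. A minimal sketch of the two options they list, assuming the nvmf target is still running as shared-memory instance 0 and that the spdk_trace tool from the same build is on PATH:
# snapshot of events at runtime, exactly as the NOTICE suggests
spdk_trace -s nvmf -i 0
# or keep the raw shared-memory trace file for offline analysis/debug
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0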
00:29:58.218 [2024-07-15 11:41:26.734203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:29:58.218 [2024-07-15 11:41:26.734378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.218 [2024-07-15 11:41:26.734390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:58.218 qpair failed and we were unable to recover it.
00:29:58.218 [2024-07-15 11:41:26.734460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:29:58.218 [2024-07-15 11:41:26.734600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:29:58.218 [2024-07-15 11:41:26.734676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.218 [2024-07-15 11:41:26.734685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:58.218 qpair failed and we were unable to recover it.
00:29:58.218 [2024-07-15 11:41:26.734601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:29:58.218 [2024-07-15 11:41:26.734982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.218 [2024-07-15 11:41:26.734992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:58.218 qpair failed and we were unable to recover it.
00:29:58.218 [2024-07-15 11:41:26.735383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.218 [2024-07-15 11:41:26.735393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:58.218 qpair failed and we were unable to recover it.
00:29:58.218 [2024-07-15 11:41:26.735673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.218 [2024-07-15 11:41:26.735690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:58.218 qpair failed and we were unable to recover it.
00:29:58.218 [2024-07-15 11:41:26.736111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.218 [2024-07-15 11:41:26.736125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:58.218 qpair failed and we were unable to recover it.
00:29:58.218 [2024-07-15 11:41:26.736511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.218 [2024-07-15 11:41:26.736521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:58.218 qpair failed and we were unable to recover it.
00:29:58.218 [2024-07-15 11:41:26.736880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.218 [2024-07-15 11:41:26.736892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:58.218 qpair failed and we were unable to recover it.
00:29:58.218 [2024-07-15 11:41:26.737306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.218 [2024-07-15 11:41:26.737316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:58.218 qpair failed and we were unable to recover it.
00:29:58.218 [2024-07-15 11:41:26.737724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.737734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 00:29:58.218 [2024-07-15 11:41:26.738129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.738139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 00:29:58.218 [2024-07-15 11:41:26.738274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.738285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 00:29:58.218 [2024-07-15 11:41:26.738751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.738760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 00:29:58.218 [2024-07-15 11:41:26.739037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.739054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 00:29:58.218 [2024-07-15 11:41:26.739532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.739542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 00:29:58.218 [2024-07-15 11:41:26.739931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.739940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 00:29:58.218 [2024-07-15 11:41:26.740348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.740357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 00:29:58.218 [2024-07-15 11:41:26.740783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.740792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 00:29:58.218 [2024-07-15 11:41:26.741197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.741208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 
00:29:58.218 [2024-07-15 11:41:26.741580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.741589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 00:29:58.218 [2024-07-15 11:41:26.741881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.741890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 00:29:58.218 [2024-07-15 11:41:26.742284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.742293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 00:29:58.218 [2024-07-15 11:41:26.742658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.742667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 00:29:58.218 [2024-07-15 11:41:26.743053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.743063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 00:29:58.218 [2024-07-15 11:41:26.743344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.743354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 00:29:58.218 [2024-07-15 11:41:26.743742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.743752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 00:29:58.218 [2024-07-15 11:41:26.744142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.744153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 00:29:58.218 [2024-07-15 11:41:26.744548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.744558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 00:29:58.218 [2024-07-15 11:41:26.744857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.744867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 
00:29:58.218 [2024-07-15 11:41:26.745256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.745266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 00:29:58.218 [2024-07-15 11:41:26.745581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.745590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 00:29:58.218 [2024-07-15 11:41:26.745974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.745984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 00:29:58.218 [2024-07-15 11:41:26.746357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.746366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 00:29:58.218 [2024-07-15 11:41:26.746759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.746768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 00:29:58.218 [2024-07-15 11:41:26.747073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.747082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 00:29:58.218 [2024-07-15 11:41:26.747459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.747468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 00:29:58.218 [2024-07-15 11:41:26.747851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.747861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 00:29:58.218 [2024-07-15 11:41:26.748097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.748108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 00:29:58.218 [2024-07-15 11:41:26.748478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.218 [2024-07-15 11:41:26.748488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.218 qpair failed and we were unable to recover it. 
00:29:58.219 [2024-07-15 11:41:26.748734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.748745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.749040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.749052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.749427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.749438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.749830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.749840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.750147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.750157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.750457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.750467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.750856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.750865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.751257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.751267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.751534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.751543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.751742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.751755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 
00:29:58.219 [2024-07-15 11:41:26.752114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.752127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.752513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.752523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.752777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.752786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.753042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.753052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.753443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.753453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.753863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.753872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.754156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.754167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.754561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.754572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.754964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.754974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.755213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.755223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 
00:29:58.219 [2024-07-15 11:41:26.755538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.755554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.755975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.755985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.756354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.756363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.756755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.756765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.757154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.757164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.757550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.757559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.757790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.757799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.758189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.758199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.758578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.758587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.758933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.758943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 
00:29:58.219 [2024-07-15 11:41:26.759346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.759356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.759742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.759752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.760159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.760169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.760451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.760466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.760862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.760872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.761154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.761164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.761524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.761533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.761943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.761952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.762344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.762353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.219 [2024-07-15 11:41:26.762543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.762553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 
00:29:58.219 [2024-07-15 11:41:26.762916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.219 [2024-07-15 11:41:26.762925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.219 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.763352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.763362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.763633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.763643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.764032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.764041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.764450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.764459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.764863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.764873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.765240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.765251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.765489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.765498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.765887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.765897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.766139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.766150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 
00:29:58.220 [2024-07-15 11:41:26.766376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.766385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.766797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.766806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.767091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.767100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.767478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.767488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.767716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.767725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.767991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.768000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.768419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.768429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.768656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.768665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.768911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.768920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.769297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.769307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 
00:29:58.220 [2024-07-15 11:41:26.769705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.769714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.770085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.770094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.770504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.770514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.770918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.770927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.771204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.771213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.771612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.771621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.771861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.771871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.772130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.772139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.772556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.772566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.772971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.772983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 
00:29:58.220 [2024-07-15 11:41:26.773469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.773509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.773818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.773830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.774056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.774065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.774465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.774475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.774852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.774861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.775076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.775085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.775367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.775376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.775589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.775598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.775985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.775995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.776361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.776371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 
00:29:58.220 [2024-07-15 11:41:26.776768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.776777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.777172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.220 [2024-07-15 11:41:26.777183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.220 qpair failed and we were unable to recover it. 00:29:58.220 [2024-07-15 11:41:26.777571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.777580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.777779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.777788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.778196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.778205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.778582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.778591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.778963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.778973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.779358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.779368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.779758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.779767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.780183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.780192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 
00:29:58.221 [2024-07-15 11:41:26.780468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.780476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.780878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.780887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.781288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.781298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.781657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.781666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.781745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.781753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.782131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.782140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.782516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.782525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.782779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.782791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.783074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.783084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.783306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.783315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 
00:29:58.221 [2024-07-15 11:41:26.783719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.783729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.784117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.784131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.784526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.784535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.784745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.784754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.785115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.785129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.785507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.785516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.785739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.785748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.786143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.786153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.786550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.786560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.786968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.786977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 
00:29:58.221 [2024-07-15 11:41:26.787348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.787360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.787749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.787758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.788168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.788177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.788391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.788400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.788785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.788794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-07-15 11:41:26.789161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.221 [2024-07-15 11:41:26.789171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.789580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.789589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.789874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.789891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.790265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.790275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.790657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.790666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 
00:29:58.222 [2024-07-15 11:41:26.790942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.790951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.791269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.791279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.791680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.791689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.792086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.792096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.792494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.792504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.792749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.792758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.793019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.793029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.793418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.793428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.793746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.793755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.794150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.794159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 
00:29:58.222 [2024-07-15 11:41:26.794595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.794604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.794971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.794980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.795408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.795417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.795832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.795841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.796226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.796235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.796636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.796645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.797022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.797031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.797267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.797278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.797669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.797679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.798132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.798142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 
00:29:58.222 [2024-07-15 11:41:26.798587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.798596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.798984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.798993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.799514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.799552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.799966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.799977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.800406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.800443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.800822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.800834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.801001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.801010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.801397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.801406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.801698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.801707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.802095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.802104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 
00:29:58.222 [2024-07-15 11:41:26.802382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.802392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.802769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.802779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.803143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.803154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.803575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.803584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.803828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.803837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.804251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.804261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.804545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.222 [2024-07-15 11:41:26.804554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-07-15 11:41:26.804964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.804973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.805337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.805346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.805725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.805735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 
00:29:58.223 [2024-07-15 11:41:26.806136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.806147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.806585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.806595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.807002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.807011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.807360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.807369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.807735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.807744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.807983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.807992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.808199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.808209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.808645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.808654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.809019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.809029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.809436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.809446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 
00:29:58.223 [2024-07-15 11:41:26.809834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.809844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.810291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.810300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.810698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.810707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.810969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.810978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.811367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.811376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.811752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.811761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.811989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.811998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.812301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.812310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.812521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.812537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.812952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.812961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 
00:29:58.223 [2024-07-15 11:41:26.813193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.813203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.813553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.813562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.813923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.813932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.814186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.814197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.814584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.814593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.815008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.815017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.815270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.815279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.815685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.815694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.816067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.816076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.816290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.816299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 
00:29:58.223 [2024-07-15 11:41:26.816704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.816713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.817082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.817091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.817491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.817501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.817866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.817875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.818077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.818086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.818483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.818492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.818858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.818866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.819162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.819172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-07-15 11:41:26.819457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.223 [2024-07-15 11:41:26.819466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.819839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.819848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 
00:29:58.224 [2024-07-15 11:41:26.820221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.820231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.820626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.820635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.820997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.821006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.821420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.821430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.821747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.821756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.822141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.822153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.822583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.822592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.822790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.822798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.823075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.823084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.823283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.823293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 
00:29:58.224 [2024-07-15 11:41:26.823706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.823715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.824143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.824153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.824551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.824560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.824958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.824967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.825382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.825391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.825821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.825830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.826219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.826229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.826635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.826644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.827034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.827043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.827424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.827435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 
00:29:58.224 [2024-07-15 11:41:26.827830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.827840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.828231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.828240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.828520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.828529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.828938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.828947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.829156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.829167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.829538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.829547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.829715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.829724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.830135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.830145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.830540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.830549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.830764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.830772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 
00:29:58.224 [2024-07-15 11:41:26.830969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.830978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.831253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.831262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.831559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.831568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.831848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.831857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.832234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.832243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.832632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.832641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.833033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.833042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.833264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.833274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.833480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.833491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 00:29:58.224 [2024-07-15 11:41:26.833610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.224 [2024-07-15 11:41:26.833619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.224 qpair failed and we were unable to recover it. 
00:29:58.224 [2024-07-15 11:41:26.833962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.833972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.834265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.834274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.834497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.834506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.834824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.834833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.835212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.835222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.835614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.835624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.836013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.836025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.836238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.836249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.836653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.836662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.837030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.837039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 
00:29:58.225 [2024-07-15 11:41:26.837242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.837251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.837664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.837673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.838041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.838050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.838342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.838352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.838718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.838728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.839092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.839101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.839469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.839479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.839861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.839870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.840238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.840248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.840661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.840670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 
00:29:58.225 [2024-07-15 11:41:26.841097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.841107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.841534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.841543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.841741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.841750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.842109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.842119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.842521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.842530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.842935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.842945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.843333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.843343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.843760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.843769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.844147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.844156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.844562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.844571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 
00:29:58.225 [2024-07-15 11:41:26.844969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.844978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.845371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.845382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.845854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.845863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.846059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.225 [2024-07-15 11:41:26.846070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.225 qpair failed and we were unable to recover it. 00:29:58.225 [2024-07-15 11:41:26.846279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.846289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.846556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.846566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.846952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.846961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.847040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.847050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.847406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.847416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.847821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.847830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 
00:29:58.226 [2024-07-15 11:41:26.848049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.848058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.848493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.848503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.848898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.848907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.849371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.849380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.849748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.849758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.850169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.850178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.850541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.850550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.850772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.850781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.851095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.851104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.851314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.851323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 
00:29:58.226 [2024-07-15 11:41:26.851672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.851681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.852126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.852135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.852503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.852512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.852895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.852904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.853100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.853109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.853361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.853371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.853692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.853701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.854091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.854100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.854469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.854479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.854734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.854743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 
00:29:58.226 [2024-07-15 11:41:26.855139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.855149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.855594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.855603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.855970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.855979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.856369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.856379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.856766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.856776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.857192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.857201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.857622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.857631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.857995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.858004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.858251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.858261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.858633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.858642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 
00:29:58.226 [2024-07-15 11:41:26.859050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.859059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.859450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.859459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.859666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.859675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.860070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.860079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.860359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.860371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.226 qpair failed and we were unable to recover it. 00:29:58.226 [2024-07-15 11:41:26.860553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.226 [2024-07-15 11:41:26.860562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.860972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.860982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.861388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.861397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.861838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.861847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.862040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.862049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 
00:29:58.227 [2024-07-15 11:41:26.862258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.862268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.862650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.862659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.863026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.863035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.863440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.863449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.863815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.863824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.864284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.864294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.864487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.864496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.864886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.864895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.865256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.865266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.865756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.865765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 
00:29:58.227 [2024-07-15 11:41:26.866168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.866179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.866570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.866579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.866949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.866958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.867328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.867338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.867607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.867616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.868006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.868015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.868258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.868267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.868701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.868709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.869081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.869090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.869333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.869342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 
00:29:58.227 [2024-07-15 11:41:26.869772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.869781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.870149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.870159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.870442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.870452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.870721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.870730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.871184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.871193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.871448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.871457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.871657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.871666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.872075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.872084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.872479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.872489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.872894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.872903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 
00:29:58.227 [2024-07-15 11:41:26.873267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.873276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.873361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.873369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.873745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.873754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.874140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.874150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.874665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.874673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.875085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.875094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.875385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.227 [2024-07-15 11:41:26.875395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.227 qpair failed and we were unable to recover it. 00:29:58.227 [2024-07-15 11:41:26.875782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.228 [2024-07-15 11:41:26.875791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.228 qpair failed and we were unable to recover it. 00:29:58.228 [2024-07-15 11:41:26.875878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.228 [2024-07-15 11:41:26.875887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.228 qpair failed and we were unable to recover it. 00:29:58.228 [2024-07-15 11:41:26.876239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.228 [2024-07-15 11:41:26.876248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.228 qpair failed and we were unable to recover it. 
00:29:58.228 [2024-07-15 11:41:26.876643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.228 [2024-07-15 11:41:26.876652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.228 qpair failed and we were unable to recover it. 00:29:58.228 [2024-07-15 11:41:26.876968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.228 [2024-07-15 11:41:26.876977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.228 qpair failed and we were unable to recover it. 00:29:58.228 [2024-07-15 11:41:26.877364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.228 [2024-07-15 11:41:26.877373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.228 qpair failed and we were unable to recover it. 00:29:58.529 [2024-07-15 11:41:26.877792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.529 [2024-07-15 11:41:26.877802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.529 qpair failed and we were unable to recover it. 00:29:58.529 [2024-07-15 11:41:26.878198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.529 [2024-07-15 11:41:26.878208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.529 qpair failed and we were unable to recover it. 00:29:58.529 [2024-07-15 11:41:26.878331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.529 [2024-07-15 11:41:26.878343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.529 qpair failed and we were unable to recover it. 00:29:58.529 [2024-07-15 11:41:26.878541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.529 [2024-07-15 11:41:26.878550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.529 qpair failed and we were unable to recover it. 00:29:58.529 [2024-07-15 11:41:26.878982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.529 [2024-07-15 11:41:26.878990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.529 qpair failed and we were unable to recover it. 00:29:58.529 [2024-07-15 11:41:26.879292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.529 [2024-07-15 11:41:26.879301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.529 qpair failed and we were unable to recover it. 00:29:58.529 [2024-07-15 11:41:26.879553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.529 [2024-07-15 11:41:26.879562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.529 qpair failed and we were unable to recover it. 
00:29:58.529 [2024-07-15 11:41:26.879910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.529 [2024-07-15 11:41:26.879919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.529 qpair failed and we were unable to recover it. 00:29:58.529 [2024-07-15 11:41:26.880338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.529 [2024-07-15 11:41:26.880347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.529 qpair failed and we were unable to recover it. 00:29:58.529 [2024-07-15 11:41:26.880797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.529 [2024-07-15 11:41:26.880806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.529 qpair failed and we were unable to recover it. 00:29:58.529 [2024-07-15 11:41:26.881097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.529 [2024-07-15 11:41:26.881107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.529 qpair failed and we were unable to recover it. 00:29:58.529 [2024-07-15 11:41:26.881551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.529 [2024-07-15 11:41:26.881560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.529 qpair failed and we were unable to recover it. 00:29:58.529 [2024-07-15 11:41:26.882008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.529 [2024-07-15 11:41:26.882018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.529 qpair failed and we were unable to recover it. 00:29:58.530 [2024-07-15 11:41:26.882418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.530 [2024-07-15 11:41:26.882428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.530 qpair failed and we were unable to recover it. 00:29:58.530 [2024-07-15 11:41:26.882820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.530 [2024-07-15 11:41:26.882829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.530 qpair failed and we were unable to recover it. 00:29:58.530 [2024-07-15 11:41:26.883216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.530 [2024-07-15 11:41:26.883226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.530 qpair failed and we were unable to recover it. 00:29:58.530 [2024-07-15 11:41:26.883655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.530 [2024-07-15 11:41:26.883665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.530 qpair failed and we were unable to recover it. 
00:29:58.530 [2024-07-15 11:41:26.883919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.530 [2024-07-15 11:41:26.883928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.530 qpair failed and we were unable to recover it.
00:29:58.535 [... the same pair of messages (posix.c:1038:posix_sock_create connect() failed, errno = 111 followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from 11:41:26.883919 through 11:41:26.957482; duplicate entries elided ...]
00:29:58.535 [2024-07-15 11:41:26.957472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.535 [2024-07-15 11:41:26.957482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.535 qpair failed and we were unable to recover it.
00:29:58.535 [2024-07-15 11:41:26.957868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.535 [2024-07-15 11:41:26.957877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.535 qpair failed and we were unable to recover it. 00:29:58.535 [2024-07-15 11:41:26.958284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.535 [2024-07-15 11:41:26.958293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.535 qpair failed and we were unable to recover it. 00:29:58.535 [2024-07-15 11:41:26.958517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.535 [2024-07-15 11:41:26.958526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.535 qpair failed and we were unable to recover it. 00:29:58.535 [2024-07-15 11:41:26.958918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.535 [2024-07-15 11:41:26.958927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.535 qpair failed and we were unable to recover it. 00:29:58.535 [2024-07-15 11:41:26.959134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.535 [2024-07-15 11:41:26.959143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.535 qpair failed and we were unable to recover it. 00:29:58.535 [2024-07-15 11:41:26.959364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.535 [2024-07-15 11:41:26.959373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.535 qpair failed and we were unable to recover it. 00:29:58.535 [2024-07-15 11:41:26.959752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.535 [2024-07-15 11:41:26.959761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.535 qpair failed and we were unable to recover it. 00:29:58.535 [2024-07-15 11:41:26.960131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.535 [2024-07-15 11:41:26.960140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.535 qpair failed and we were unable to recover it. 00:29:58.535 [2024-07-15 11:41:26.960603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.535 [2024-07-15 11:41:26.960612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.535 qpair failed and we were unable to recover it. 00:29:58.535 [2024-07-15 11:41:26.961032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.535 [2024-07-15 11:41:26.961041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.535 qpair failed and we were unable to recover it. 
00:29:58.535 [2024-07-15 11:41:26.961423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.535 [2024-07-15 11:41:26.961433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.535 qpair failed and we were unable to recover it. 00:29:58.535 [2024-07-15 11:41:26.961839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.535 [2024-07-15 11:41:26.961848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.535 qpair failed and we were unable to recover it. 00:29:58.535 [2024-07-15 11:41:26.962209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.535 [2024-07-15 11:41:26.962219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.535 qpair failed and we were unable to recover it. 00:29:58.535 [2024-07-15 11:41:26.962635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.535 [2024-07-15 11:41:26.962644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.535 qpair failed and we were unable to recover it. 00:29:58.535 [2024-07-15 11:41:26.963023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.535 [2024-07-15 11:41:26.963032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.535 qpair failed and we were unable to recover it. 00:29:58.535 [2024-07-15 11:41:26.963327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.535 [2024-07-15 11:41:26.963336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.535 qpair failed and we were unable to recover it. 00:29:58.535 [2024-07-15 11:41:26.963729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.535 [2024-07-15 11:41:26.963738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.535 qpair failed and we were unable to recover it. 00:29:58.535 [2024-07-15 11:41:26.964076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.535 [2024-07-15 11:41:26.964086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.535 qpair failed and we were unable to recover it. 00:29:58.535 [2024-07-15 11:41:26.964527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.535 [2024-07-15 11:41:26.964536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.535 qpair failed and we were unable to recover it. 00:29:58.535 [2024-07-15 11:41:26.964801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.535 [2024-07-15 11:41:26.964810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.535 qpair failed and we were unable to recover it. 
00:29:58.535 [2024-07-15 11:41:26.965210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.535 [2024-07-15 11:41:26.965219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.535 qpair failed and we were unable to recover it. 00:29:58.535 [2024-07-15 11:41:26.965589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.535 [2024-07-15 11:41:26.965598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.535 qpair failed and we were unable to recover it. 00:29:58.535 [2024-07-15 11:41:26.966021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.535 [2024-07-15 11:41:26.966030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.535 qpair failed and we were unable to recover it. 00:29:58.535 [2024-07-15 11:41:26.966294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.535 [2024-07-15 11:41:26.966304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.535 qpair failed and we were unable to recover it. 00:29:58.536 [2024-07-15 11:41:26.966721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-07-15 11:41:26.966730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-07-15 11:41:26.967097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-07-15 11:41:26.967107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-07-15 11:41:26.967510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-07-15 11:41:26.967521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-07-15 11:41:26.967909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-07-15 11:41:26.967918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-07-15 11:41:26.968202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-07-15 11:41:26.968212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-07-15 11:41:26.968587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-07-15 11:41:26.968596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 
00:29:58.536 [2024-07-15 11:41:26.968794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-07-15 11:41:26.968803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-07-15 11:41:26.969202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-07-15 11:41:26.969211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-07-15 11:41:26.969607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-07-15 11:41:26.969616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-07-15 11:41:26.970014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-07-15 11:41:26.970026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-07-15 11:41:26.970425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-07-15 11:41:26.970435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-07-15 11:41:26.970798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-07-15 11:41:26.970807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-07-15 11:41:26.971200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-07-15 11:41:26.971210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-07-15 11:41:26.971619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-07-15 11:41:26.971631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-07-15 11:41:26.972044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-07-15 11:41:26.972054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-07-15 11:41:26.972344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-07-15 11:41:26.972354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 
00:29:58.536 [2024-07-15 11:41:26.972611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-07-15 11:41:26.972620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-07-15 11:41:26.973018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-07-15 11:41:26.973027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-07-15 11:41:26.973400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-07-15 11:41:26.973410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-07-15 11:41:26.973799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-07-15 11:41:26.973808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-07-15 11:41:26.974005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-07-15 11:41:26.974014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-07-15 11:41:26.974485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-07-15 11:41:26.974496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-07-15 11:41:26.974961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-07-15 11:41:26.974971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-07-15 11:41:26.975355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-07-15 11:41:26.975365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-07-15 11:41:26.975777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-07-15 11:41:26.975789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 00:29:58.536 [2024-07-15 11:41:26.976198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.536 [2024-07-15 11:41:26.976208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420 00:29:58.536 qpair failed and we were unable to recover it. 
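The errno in the repeated failures above is 111, which on Linux is ECONNREFUSED: nothing is accepting TCP connections on 10.0.0.2:4420 (the conventional NVMe/TCP port) at this point in the test. The program below is a minimal, self-contained sketch of the condition posix_sock_create() is reporting; the address and port are copied from the log, and the code is an illustration, not SPDK source.

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Address and port copied from the log; everything else is illustrative. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no NVMe/TCP target listening, this prints errno 111
         * ("Connection refused") -- the same condition posix_sock_create()
         * keeps logging above. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}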
00:29:58.536 [2024-07-15 11:41:26.976400 .. 11:41:26.977101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.536 nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bf220 with addr=10.0.0.2, port=4420
00:29:58.536 qpair failed and we were unable to recover it. (four further attempts, same pair of errors each time)
00:29:58.536 [2024-07-15 11:41:26.977321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ccf20 is same with the state(5) to be set
00:29:58.536 Read/Write completed with error (sct=0, sc=8) -- starting I/O failed (reported for 32 outstanding I/Os in a row, reads and writes interleaved)
00:29:58.537 [2024-07-15 11:41:26.977702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.537 [2024-07-15 11:41:26.978164 .. 11:41:26.980396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.537 nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420
00:29:58.537 qpair failed and we were unable to recover it. (reconnect attempts now target a new qpair; the same failure repeats on each of them)
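The 32 "Read/Write completed with error (sct=0, sc=8)" entries and the "CQ transport error -6 (No such device or address)" line record the host failing its outstanding I/O once the queue pair is declared dead. As a hedged reading: sct=0 is the NVMe generic command status type and, assuming the usual spec encoding, sc=0x08 is "Command Aborted due to SQ Deletion"; -6 matches -ENXIO. The small program below only checks the Linux errno constants and spells out that assumed decoding; it is not SPDK source.

#include <errno.h>
#include <stdio.h>

/* Maps the generic (sct=0) status codes seen in this log to text.
 * The 0x08 meaning is an assumption based on the NVMe base specification. */
static const char *decode_generic_sc(int sc)
{
    switch (sc) {
    case 0x00: return "successful completion";
    case 0x08: return "command aborted due to SQ deletion (assumed)";
    default:   return "other generic status";
    }
}

int main(void)
{
    int sct = 0, sc = 8, transport_err = -6;   /* values taken from the log */

    printf("sct=%d sc=0x%02x -> %s\n", sct, sc, decode_generic_sc(sc));
    printf("transport error %d is -ENXIO: %s\n",
           transport_err, (-transport_err == ENXIO) ? "yes" : "no");
    printf("connect errno 111 is ECONNREFUSED: %s\n",
           (111 == ECONNREFUSED) ? "yes" : "no");
    return 0;
}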
00:29:58.537 [2024-07-15 11:41:26.980788 .. 11:41:27.016684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.537 nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420
00:29:58.537 qpair failed and we were unable to recover it.
00:29:58.537 (the same three-line failure repeats for every reconnect attempt in this window; only the per-attempt timestamps differ)
00:29:58.539 [2024-07-15 11:41:27.017065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.539 [2024-07-15 11:41:27.017072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.539 qpair failed and we were unable to recover it. 00:29:58.539 [2024-07-15 11:41:27.017490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.539 [2024-07-15 11:41:27.017496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.539 qpair failed and we were unable to recover it. 00:29:58.539 [2024-07-15 11:41:27.017869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.539 [2024-07-15 11:41:27.017876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.539 qpair failed and we were unable to recover it. 00:29:58.539 [2024-07-15 11:41:27.018176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.539 [2024-07-15 11:41:27.018183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.539 qpair failed and we were unable to recover it. 00:29:58.539 [2024-07-15 11:41:27.018375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.539 [2024-07-15 11:41:27.018381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.539 qpair failed and we were unable to recover it. 00:29:58.539 [2024-07-15 11:41:27.018620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.539 [2024-07-15 11:41:27.018626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.539 qpair failed and we were unable to recover it. 00:29:58.539 [2024-07-15 11:41:27.019015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.539 [2024-07-15 11:41:27.019022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.539 qpair failed and we were unable to recover it. 00:29:58.539 [2024-07-15 11:41:27.019377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.539 [2024-07-15 11:41:27.019383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.019748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.019754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.020045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.020052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 
00:29:58.540 [2024-07-15 11:41:27.020442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.020448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.020890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.020896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.021202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.021209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.021597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.021603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.021883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.021889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.022282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.022289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.022656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.022662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.023056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.023063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.023430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.023437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.023780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.023786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 
00:29:58.540 [2024-07-15 11:41:27.024199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.024205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.024457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.024464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.024724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.024731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.025116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.025125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.025533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.025539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.025948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.025954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.026368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.026375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.026762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.026768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.027008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.027015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.027308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.027317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 
00:29:58.540 [2024-07-15 11:41:27.027693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.027700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.028070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.028077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.028284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.028291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.028716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.028722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.028919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.028926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.029323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.029329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.029588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.029595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.029979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.029986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.030382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.030388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.030580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.030588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 
00:29:58.540 [2024-07-15 11:41:27.030789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.030796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.031159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.031166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.031560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.031567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.031979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.031986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.032237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.032243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.032653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.032659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.033031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.033037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.033423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.033430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.540 qpair failed and we were unable to recover it. 00:29:58.540 [2024-07-15 11:41:27.033493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.540 [2024-07-15 11:41:27.033499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.033906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.033913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 
00:29:58.541 [2024-07-15 11:41:27.034318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.034325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.034688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.034695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.035087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.035093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.035491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.035499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.035884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.035890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.036304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.036310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.036685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.036693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.036948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.036954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.037144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.037151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.037542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.037549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 
00:29:58.541 [2024-07-15 11:41:27.037916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.037923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.038325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.038353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.038557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.038565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.038953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.038967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.039271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.039279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.039573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.039580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.039960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.039967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.040349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.040356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.040563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.040570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.040946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.040956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 
00:29:58.541 [2024-07-15 11:41:27.041348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.041355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.041418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.041424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.041771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.041778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.042086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.042093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.042512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.042518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.042889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.042895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.043267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.043274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.043702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.043708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.044085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.044092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.044479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.044486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 
00:29:58.541 [2024-07-15 11:41:27.044897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.044905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.045342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.045369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.045576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.045583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.045939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.045946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.046372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.046379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.046748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.046755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.046960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.046966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.047372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.047379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.047624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.047631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.541 qpair failed and we were unable to recover it. 00:29:58.541 [2024-07-15 11:41:27.048107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.541 [2024-07-15 11:41:27.048114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 
00:29:58.542 [2024-07-15 11:41:27.048517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.048524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-07-15 11:41:27.048900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.048906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-07-15 11:41:27.049415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.049442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-07-15 11:41:27.049856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.049864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-07-15 11:41:27.050080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.050087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-07-15 11:41:27.050372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.050379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-07-15 11:41:27.050770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.050778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-07-15 11:41:27.051168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.051175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-07-15 11:41:27.051602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.051608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-07-15 11:41:27.051974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.051981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 
00:29:58.542 [2024-07-15 11:41:27.052180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.052189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-07-15 11:41:27.052578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.052593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-07-15 11:41:27.052822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.052829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-07-15 11:41:27.053124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.053131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-07-15 11:41:27.053530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.053536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-07-15 11:41:27.053949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.053955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-07-15 11:41:27.054435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.054462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-07-15 11:41:27.054675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.054684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-07-15 11:41:27.054907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.054914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-07-15 11:41:27.055379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.055390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 
00:29:58.542 [2024-07-15 11:41:27.055613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.055619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-07-15 11:41:27.055884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.055891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-07-15 11:41:27.056291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.056298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-07-15 11:41:27.056671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.056677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-07-15 11:41:27.056909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.056915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-07-15 11:41:27.057335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.057342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-07-15 11:41:27.057545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.057553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-07-15 11:41:27.057742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.057749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-07-15 11:41:27.058030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.058037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.542 [2024-07-15 11:41:27.058329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.058336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 
00:29:58.542 [2024-07-15 11:41:27.058581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.542 [2024-07-15 11:41:27.058588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.542 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.058982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.058988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.059355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.059362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.059428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.059433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.059849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.059856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.060243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.060250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.060635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.060641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.061059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.061065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.061262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.061268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.061620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.061626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 
00:29:58.543 [2024-07-15 11:41:27.062002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.062009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.062413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.062420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.062784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.062790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.063274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.063280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.063657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.063663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.064039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.064045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.064300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.064306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.064699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.064705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.065068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.065074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.065482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.065488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 
00:29:58.543 [2024-07-15 11:41:27.065853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.065859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.066116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.066126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.066501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.066508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.066800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.066807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.067198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.067205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.067486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.067492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.067880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.067887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.068297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.068303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.068750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.068756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.069171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.069181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 
00:29:58.543 [2024-07-15 11:41:27.069684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.069690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.070057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.070064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.070455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.070462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.070853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.070860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.071274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.071281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.071646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.071653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.072048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.072055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.072277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.072283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.072674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.072689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.073082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.073088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 
00:29:58.543 [2024-07-15 11:41:27.073307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.543 [2024-07-15 11:41:27.073313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.543 qpair failed and we were unable to recover it. 00:29:58.543 [2024-07-15 11:41:27.073521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.073528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.073928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.073935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.074347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.074353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.074718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.074725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.075060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.075075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.075307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.075314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.075529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.075535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.075612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.075618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.075970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.075976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 
00:29:58.544 [2024-07-15 11:41:27.076348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.076355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.076753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.076760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.076978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.076985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.077183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.077191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.077388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.077394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.077781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.077787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.078206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.078213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.078457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.078463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.078903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.078909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.079295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.079301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 
00:29:58.544 [2024-07-15 11:41:27.079662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.079668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.080051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.080058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.080468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.080475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.080863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.080871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.081260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.081266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.081720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.081726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.082139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.082146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.082510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.082517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.082811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.082817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.083260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.083266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 
00:29:58.544 [2024-07-15 11:41:27.083502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.083509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.083893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.083899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.084265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.084271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.084564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.084570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.084953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.084959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.085158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.085165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.085613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.085619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.085810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.085816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.086180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.086186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.086443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.086449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 
00:29:58.544 [2024-07-15 11:41:27.086718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.086724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.544 [2024-07-15 11:41:27.087094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.544 [2024-07-15 11:41:27.087100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.544 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.087453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.087460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.087848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.087855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.088059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.088066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.088442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.088449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.088836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.088843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.089254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.089260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.089635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.089641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.089861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.089868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 
00:29:58.545 [2024-07-15 11:41:27.090255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.090261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.090644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.090650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.090835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.090841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.091028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.091035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.091442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.091449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.091662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.091669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.092049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.092057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.092433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.092439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.092813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.092819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.093235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.093242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 
00:29:58.545 [2024-07-15 11:41:27.093654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.093660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.094028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.094034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.094424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.094431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.094842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.094848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.095224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.095237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.095744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.095751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.096164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.096171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.096559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.096566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.096767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.096774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.096988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.096995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 
00:29:58.545 [2024-07-15 11:41:27.097363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.097369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.097756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.097762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.098185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.098191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.098364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.098370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.098732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.098739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.098993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.098999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.099465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.099472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.099564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.099570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.099887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.099894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.100097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.100103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 
00:29:58.545 [2024-07-15 11:41:27.100504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.100511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.100907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.545 [2024-07-15 11:41:27.100914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.545 qpair failed and we were unable to recover it. 00:29:58.545 [2024-07-15 11:41:27.101165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.101173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.101476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.101483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.101871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.101877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.102238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.102245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.102636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.102643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.102913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.102920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.103343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.103350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.103636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.103643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 
00:29:58.546 [2024-07-15 11:41:27.104029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.104035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.104427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.104433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.104707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.104713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.105102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.105109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.105480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.105486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.105652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.105658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.105894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.105911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.106295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.106301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.106678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.106684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.107059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.107065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 
00:29:58.546 [2024-07-15 11:41:27.107445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.107453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.107858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.107864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.108276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.108283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.108664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.108670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.109081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.109087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.109295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.109302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.109744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.109750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.109918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.109925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.110263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.110270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.110510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.110517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 
00:29:58.546 [2024-07-15 11:41:27.110901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.110908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.111301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.111308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.111687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.111693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.112110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.112116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.112515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.112522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.112909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.112915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.113292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.113298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.113753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.113759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.114002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.114008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.546 [2024-07-15 11:41:27.114381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.114388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 
00:29:58.546 [2024-07-15 11:41:27.114761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.546 [2024-07-15 11:41:27.114767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.546 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.115141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.115148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.115339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.115346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.115776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.115783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.116073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.116080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.116469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.116476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.116846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.116852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.117267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.117275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.117525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.117532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.117735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.117743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 
00:29:58.547 [2024-07-15 11:41:27.117960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.117967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.118230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.118236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.118488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.118495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.118704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.118711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.119063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.119069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.119441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.119448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.119831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.119838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.120036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.120042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.120443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.120449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.120894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.120900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 
00:29:58.547 [2024-07-15 11:41:27.121271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.121278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.121692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.121698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.122088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.122095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.122345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.122352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.122625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.122631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.123011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.123017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.123454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.123460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.123667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.123673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.124031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.124038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.124425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.124432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 
00:29:58.547 [2024-07-15 11:41:27.124722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.124729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.124929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.124937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.125312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.125318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.125703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.547 [2024-07-15 11:41:27.125710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.547 qpair failed and we were unable to recover it. 00:29:58.547 [2024-07-15 11:41:27.126120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.126130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.126501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.126508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.126879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.126885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.127253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.127259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.127572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.127579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.127967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.127973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 
00:29:58.548 [2024-07-15 11:41:27.128351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.128357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.128715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.128721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.128921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.128927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.129293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.129299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.129697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.129704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.129892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.129898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.130249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.130256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.130670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.130677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.131041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.131048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.131270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.131276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 
00:29:58.548 [2024-07-15 11:41:27.131685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.131691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.132054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.132060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.132235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.132242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.132497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.132505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.132660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.132666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.132932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.132939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.133366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.133374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.133634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.133641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.133966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.133972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.134246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.134253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 
00:29:58.548 [2024-07-15 11:41:27.134653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.134659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.134948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.134955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.135364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.135370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.135741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.135748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.135812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.135818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.136168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.136174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.136575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.136582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.136786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.136792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.137051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.137057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.137451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.137457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 
00:29:58.548 [2024-07-15 11:41:27.137847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.137854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.138231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.138237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.138587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.138594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.138983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.138989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.139366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.548 [2024-07-15 11:41:27.139373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.548 qpair failed and we were unable to recover it. 00:29:58.548 [2024-07-15 11:41:27.139575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.139581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.139767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.139775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.140141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.140147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.140549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.140555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.140947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.140954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 
00:29:58.549 [2024-07-15 11:41:27.141344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.141350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.141715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.141722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.142007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.142013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.142413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.142420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.142619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.142625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.143027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.143034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.143240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.143246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.143605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.143612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.143774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.143781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.144185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.144192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 
00:29:58.549 [2024-07-15 11:41:27.144443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.144449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.144734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.144740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.145139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.145145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.145329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.145336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.145555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.145567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.145910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.145916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.146112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.146124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.146367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.146373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.146773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.146780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.146999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.147005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 
00:29:58.549 [2024-07-15 11:41:27.147383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.147390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.147796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.147803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.148213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.148219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.148605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.148611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.148793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.148800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.149247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.149253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.149634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.149641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.150057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.150064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.150438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.150444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.150717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.150723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 
00:29:58.549 [2024-07-15 11:41:27.150977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.150984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.151344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.151350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.151642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.151649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.152040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.152046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.152247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.152254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.152657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.549 [2024-07-15 11:41:27.152663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.549 qpair failed and we were unable to recover it. 00:29:58.549 [2024-07-15 11:41:27.153039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.153045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.153422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.153428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.153817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.153823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.154210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.154217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 
00:29:58.550 [2024-07-15 11:41:27.154594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.154600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.154970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.154976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.155392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.155399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.155767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.155774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.156195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.156201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.156603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.156610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.156993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.156999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.157409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.157416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.157614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.157620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.158021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.158028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 
00:29:58.550 [2024-07-15 11:41:27.158421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.158428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.158837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.158844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.159266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.159272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.159656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.159663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.160034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.160041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.160407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.160414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.160806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.160814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.161180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.161186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.161585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.161592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.161966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.161972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 
00:29:58.550 [2024-07-15 11:41:27.162341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.162347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.162731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.162738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.163200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.163206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.163583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.163589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.164007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.164013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.164375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.164388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.164772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.164778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.165184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.165191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.165590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.165597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.165992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.165999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 
00:29:58.550 [2024-07-15 11:41:27.166418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.166424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.166798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.166804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.167201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.167208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.167620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.167627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.168001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.168007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.168302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.168309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.168523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.550 [2024-07-15 11:41:27.168530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.550 qpair failed and we were unable to recover it. 00:29:58.550 [2024-07-15 11:41:27.168878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.168886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.169260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.169267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.169643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.169650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 
00:29:58.551 [2024-07-15 11:41:27.169881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.169887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.170298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.170305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.170689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.170696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.170948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.170955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.171348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.171355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.171596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.171603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.171823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.171829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.172235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.172241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.172435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.172440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.172654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.172660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 
00:29:58.551 [2024-07-15 11:41:27.173045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.173052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.173323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.173330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.173583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.173589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.173846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.173853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.174267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.174275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.174541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.174548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.174935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.174944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.175330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.175336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.175602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.175609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.176002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.176008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 
00:29:58.551 [2024-07-15 11:41:27.176373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.176380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.176601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.176607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.177021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.177028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.177396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.177403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.177795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.177801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.178175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.178182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.178580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.178587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.178977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.178983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.179394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.179400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.179783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.179789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 
00:29:58.551 [2024-07-15 11:41:27.180066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.180072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.551 [2024-07-15 11:41:27.180466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.551 [2024-07-15 11:41:27.180473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.551 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.180668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.180675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.181045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.181052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.181460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.181467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.181878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.181884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.182094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.182101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.182470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.182478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.182768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.182775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.182981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.182988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 
00:29:58.552 [2024-07-15 11:41:27.183374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.183381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.183744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.183750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.184166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.184173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.184468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.184475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.184724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.184732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.185132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.185139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.185531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.185538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.185957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.185963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.186377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.186383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.186752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.186760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 
00:29:58.552 [2024-07-15 11:41:27.187139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.187146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.187433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.187440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.187846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.187853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.188224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.188230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.188614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.188622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.189008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.189015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.189425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.189434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.189649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.189656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.190023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.190030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.190412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.190419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 
00:29:58.552 [2024-07-15 11:41:27.190751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.190758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.191144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.191151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.191541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.191548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.191961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.191968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.192334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.192341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.192722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.192729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.193125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.193131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.193533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.193539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.193931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.193938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.194327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.194355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 
00:29:58.552 [2024-07-15 11:41:27.194742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.194750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.194834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.194842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.552 [2024-07-15 11:41:27.195217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.552 [2024-07-15 11:41:27.195225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.552 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.195652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.195659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.195831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.195837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.196332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.196339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.196742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.196749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.197141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.197148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.197554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.197562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.197788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.197794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 
00:29:58.553 [2024-07-15 11:41:27.197979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.197987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.198226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.198233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.198627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.198635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.199065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.199073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.199500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.199508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.199899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.199905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.200116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.200126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.200492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.200499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.200869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.200875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.201289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.201295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 
00:29:58.553 [2024-07-15 11:41:27.201678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.201685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.202073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.202080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.202534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.202541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.202913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.202920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.203354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.203382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.203848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.203857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.204360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.204391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.204774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.204782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.205011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.205017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.205438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.205446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 
00:29:58.553 [2024-07-15 11:41:27.205732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.205740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.205995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.206001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.206379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.206387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.206780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.206787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.207090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.207097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.207511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.207518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.207887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.207894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.208107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.208113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.208506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.208513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.208888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.208895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 
00:29:58.553 [2024-07-15 11:41:27.209385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.209412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.209807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.209815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.553 [2024-07-15 11:41:27.210078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.553 [2024-07-15 11:41:27.210085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.553 qpair failed and we were unable to recover it. 00:29:58.554 [2024-07-15 11:41:27.210479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-07-15 11:41:27.210486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-07-15 11:41:27.210858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-07-15 11:41:27.210866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-07-15 11:41:27.211359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-07-15 11:41:27.211387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-07-15 11:41:27.211775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-07-15 11:41:27.211783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-07-15 11:41:27.212157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-07-15 11:41:27.212164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-07-15 11:41:27.212570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-07-15 11:41:27.212577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-07-15 11:41:27.212976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-07-15 11:41:27.212983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 
00:29:58.554 [2024-07-15 11:41:27.213345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-07-15 11:41:27.213352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-07-15 11:41:27.213713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-07-15 11:41:27.213719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-07-15 11:41:27.214097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-07-15 11:41:27.214104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-07-15 11:41:27.214476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-07-15 11:41:27.214483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-07-15 11:41:27.214893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-07-15 11:41:27.214900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-07-15 11:41:27.215100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-07-15 11:41:27.215106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-07-15 11:41:27.215500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-07-15 11:41:27.215507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-07-15 11:41:27.215914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-07-15 11:41:27.215920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-07-15 11:41:27.216404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-07-15 11:41:27.216431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 00:29:58.554 [2024-07-15 11:41:27.216638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.554 [2024-07-15 11:41:27.216647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.554 qpair failed and we were unable to recover it. 
00:29:58.826 [2024-07-15 11:41:27.217052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.826 [2024-07-15 11:41:27.217061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.826 qpair failed and we were unable to recover it. 00:29:58.826 [2024-07-15 11:41:27.217356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.826 [2024-07-15 11:41:27.217363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.826 qpair failed and we were unable to recover it. 00:29:58.826 [2024-07-15 11:41:27.217777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.826 [2024-07-15 11:41:27.217784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.826 qpair failed and we were unable to recover it. 00:29:58.826 [2024-07-15 11:41:27.217982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.826 [2024-07-15 11:41:27.217990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.826 qpair failed and we were unable to recover it. 00:29:58.826 [2024-07-15 11:41:27.218376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.826 [2024-07-15 11:41:27.218382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.826 qpair failed and we were unable to recover it. 00:29:58.826 [2024-07-15 11:41:27.218795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.826 [2024-07-15 11:41:27.218803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.826 qpair failed and we were unable to recover it. 00:29:58.826 [2024-07-15 11:41:27.219139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.826 [2024-07-15 11:41:27.219151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.826 qpair failed and we were unable to recover it. 00:29:58.826 [2024-07-15 11:41:27.219373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.826 [2024-07-15 11:41:27.219379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.826 qpair failed and we were unable to recover it. 00:29:58.826 [2024-07-15 11:41:27.219767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.826 [2024-07-15 11:41:27.219773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.826 qpair failed and we were unable to recover it. 00:29:58.826 [2024-07-15 11:41:27.219999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.826 [2024-07-15 11:41:27.220006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.826 qpair failed and we were unable to recover it. 
00:29:58.826 [2024-07-15 11:41:27.220331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.826 [2024-07-15 11:41:27.220338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.826 qpair failed and we were unable to recover it. 00:29:58.826 [2024-07-15 11:41:27.220778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.826 [2024-07-15 11:41:27.220784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.826 qpair failed and we were unable to recover it. 00:29:58.826 [2024-07-15 11:41:27.221046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.826 [2024-07-15 11:41:27.221052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.826 qpair failed and we were unable to recover it. 00:29:58.826 [2024-07-15 11:41:27.221449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.826 [2024-07-15 11:41:27.221456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.826 qpair failed and we were unable to recover it. 00:29:58.826 [2024-07-15 11:41:27.221872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.826 [2024-07-15 11:41:27.221878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.826 qpair failed and we were unable to recover it. 00:29:58.826 [2024-07-15 11:41:27.222266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.826 [2024-07-15 11:41:27.222273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.826 qpair failed and we were unable to recover it. 00:29:58.826 [2024-07-15 11:41:27.222672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.826 [2024-07-15 11:41:27.222678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.826 qpair failed and we were unable to recover it. 00:29:58.826 [2024-07-15 11:41:27.222918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.826 [2024-07-15 11:41:27.222925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.826 qpair failed and we were unable to recover it. 00:29:58.826 [2024-07-15 11:41:27.223337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.826 [2024-07-15 11:41:27.223343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.826 qpair failed and we were unable to recover it. 00:29:58.826 [2024-07-15 11:41:27.223557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.826 [2024-07-15 11:41:27.223564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.826 qpair failed and we were unable to recover it. 
00:29:58.826 [2024-07-15 11:41:27.223959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.826 [2024-07-15 11:41:27.223965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.826 qpair failed and we were unable to recover it. 00:29:58.826 [2024-07-15 11:41:27.224253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.826 [2024-07-15 11:41:27.224260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.826 qpair failed and we were unable to recover it. 00:29:58.826 [2024-07-15 11:41:27.224709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.224716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.224794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.224800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.225177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.225184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.225542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.225548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.225762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.225768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.225943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.225951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.226352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.226359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.226726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.226732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 
00:29:58.827 [2024-07-15 11:41:27.227155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.227161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.227551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.227558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.227932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.227939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.228147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.228153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.228218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.228224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.228591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.228597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.228971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.228978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.229231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.229237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.229621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.229627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.229990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.229996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 
00:29:58.827 [2024-07-15 11:41:27.230493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.230499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.230777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.230792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.231183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.231190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.231557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.231564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.231963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.231970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.232363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.232370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.232767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.232775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.233180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.233188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.233601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.233607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.234021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.234027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 
00:29:58.827 [2024-07-15 11:41:27.234427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.234433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.234808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.234814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.234877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.234883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.235236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.235243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.235432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.235438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.235867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.235873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.236245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.236251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.236425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.236432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.236695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.236701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.237080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.237086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 
00:29:58.827 [2024-07-15 11:41:27.237458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.237465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.237828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.237835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.827 qpair failed and we were unable to recover it. 00:29:58.827 [2024-07-15 11:41:27.238233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.827 [2024-07-15 11:41:27.238239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.828 qpair failed and we were unable to recover it. 00:29:58.828 [2024-07-15 11:41:27.238596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.828 [2024-07-15 11:41:27.238603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.828 qpair failed and we were unable to recover it. 00:29:58.828 [2024-07-15 11:41:27.238993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.828 [2024-07-15 11:41:27.238999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.828 qpair failed and we were unable to recover it. 00:29:58.828 [2024-07-15 11:41:27.239419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.828 [2024-07-15 11:41:27.239426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.828 qpair failed and we were unable to recover it. 00:29:58.828 [2024-07-15 11:41:27.239821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.828 [2024-07-15 11:41:27.239828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.828 qpair failed and we were unable to recover it. 00:29:58.828 [2024-07-15 11:41:27.240265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.828 [2024-07-15 11:41:27.240272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.828 qpair failed and we were unable to recover it. 00:29:58.828 [2024-07-15 11:41:27.240703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.828 [2024-07-15 11:41:27.240710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.828 qpair failed and we were unable to recover it. 00:29:58.828 [2024-07-15 11:41:27.240956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.828 [2024-07-15 11:41:27.240963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.828 qpair failed and we were unable to recover it. 
00:29:58.828 [2024-07-15 11:41:27.241365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.828 [2024-07-15 11:41:27.241372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420
00:29:58.828 qpair failed and we were unable to recover it.
00:29:58.828 [2024-07-15 11:41:27.241620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.828 [2024-07-15 11:41:27.241626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420
00:29:58.828 qpair failed and we were unable to recover it.
00:29:58.828 [2024-07-15 11:41:27.242020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.828 [2024-07-15 11:41:27.242026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420
00:29:58.828 qpair failed and we were unable to recover it.
00:29:58.828 - 00:29:58.834 [2024-07-15 11:41:27.242 - 11:41:27.315] the same three lines (posix.c:1038:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeat for every further connection attempt in this interval.
00:29:58.834 [2024-07-15 11:41:27.315372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.834 [2024-07-15 11:41:27.315379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420
00:29:58.834 qpair failed and we were unable to recover it.
00:29:58.834 [2024-07-15 11:41:27.315773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.315780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.316176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.316183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.316381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.316387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.316790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.316797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.317205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.317212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.317415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.317422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.317823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.317829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.318280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.318287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.318679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.318686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.319107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.319115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 
00:29:58.834 [2024-07-15 11:41:27.319331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.319338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.319720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.319727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.320134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.320142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.320392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.320401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.320811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.320818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.321227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.321235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.321487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.321495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.321918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.321926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.322316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.322323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.322584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.322590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 
00:29:58.834 [2024-07-15 11:41:27.323006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.323013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.323497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.323504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.323867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.323874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.324127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.324134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.324516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.324524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.324808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.324816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.325234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.325241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.325512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.325519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.325913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.325920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.326339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.326345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 
00:29:58.834 [2024-07-15 11:41:27.326653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.326660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.327047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.327053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.327418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.834 [2024-07-15 11:41:27.327425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.834 qpair failed and we were unable to recover it. 00:29:58.834 [2024-07-15 11:41:27.327702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.327711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.328101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.328107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.328486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.328493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.328893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.328900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.329162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.329169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.329590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.329597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.329877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.329883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 
00:29:58.835 [2024-07-15 11:41:27.330096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.330103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.330279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.330286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.330349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.330356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.330597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.330603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.331000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.331007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.331391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.331398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.331787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.331793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.332037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.332043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.332339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.332345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.332536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.332545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 
00:29:58.835 [2024-07-15 11:41:27.332899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.332906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.333194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.333200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.333556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.333563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.333953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.333960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.334151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.334159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.334362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.334369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.334766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.334773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.335150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.335157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.335573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.335580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.335981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.335988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 
00:29:58.835 [2024-07-15 11:41:27.336351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.336358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.336570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.336577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.336946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.336952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.337157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.337165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.337551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.337558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.337940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.337946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.835 qpair failed and we were unable to recover it. 00:29:58.835 [2024-07-15 11:41:27.338360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.835 [2024-07-15 11:41:27.338368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.338586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.338593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.338810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.338817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.339194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.339201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 
00:29:58.836 [2024-07-15 11:41:27.339497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.339504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.339854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.339861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.340233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.340240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.340422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.340431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.340510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.340516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.340869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.340878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.341284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.341291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.341580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.341588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.341973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.341979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.342382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.342389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 
00:29:58.836 [2024-07-15 11:41:27.342830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.342836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.343137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.343144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.343543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.343550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.343917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.343924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.344324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.344330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.344686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.344693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.345088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.345094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.345443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.345450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.345840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.345846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.346044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.346052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 
00:29:58.836 [2024-07-15 11:41:27.346277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.346285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.346686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.346693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.347063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.347070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.347291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.347297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.347710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.347716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.347972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.347978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.348395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.348401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.348772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.348780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.348987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.348994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.349315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.349322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 
00:29:58.836 [2024-07-15 11:41:27.349766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.349772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.349920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.349927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.350463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.350470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.350867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.350874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.351245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.351251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.351536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.351544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.351926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.836 [2024-07-15 11:41:27.351932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.836 qpair failed and we were unable to recover it. 00:29:58.836 [2024-07-15 11:41:27.352312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.837 [2024-07-15 11:41:27.352319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.837 qpair failed and we were unable to recover it. 00:29:58.837 [2024-07-15 11:41:27.352492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.837 [2024-07-15 11:41:27.352498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.837 qpair failed and we were unable to recover it. 00:29:58.837 [2024-07-15 11:41:27.352896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.837 [2024-07-15 11:41:27.352903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.837 qpair failed and we were unable to recover it. 
00:29:58.837 [2024-07-15 11:41:27.353322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.837 [2024-07-15 11:41:27.353329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.837 qpair failed and we were unable to recover it. 00:29:58.837 [2024-07-15 11:41:27.353743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.837 [2024-07-15 11:41:27.353749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.837 qpair failed and we were unable to recover it. 00:29:58.837 [2024-07-15 11:41:27.354115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.837 [2024-07-15 11:41:27.354124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.837 qpair failed and we were unable to recover it. 00:29:58.837 [2024-07-15 11:41:27.354533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.837 [2024-07-15 11:41:27.354542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.837 qpair failed and we were unable to recover it. 00:29:58.837 [2024-07-15 11:41:27.355021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.837 [2024-07-15 11:41:27.355028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.837 qpair failed and we were unable to recover it. 00:29:58.837 [2024-07-15 11:41:27.355434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.837 [2024-07-15 11:41:27.355441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.837 qpair failed and we were unable to recover it. 00:29:58.837 [2024-07-15 11:41:27.355730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.837 [2024-07-15 11:41:27.355737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.837 qpair failed and we were unable to recover it. 00:29:58.837 [2024-07-15 11:41:27.356109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.837 [2024-07-15 11:41:27.356115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.837 qpair failed and we were unable to recover it. 00:29:58.837 [2024-07-15 11:41:27.356435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.837 [2024-07-15 11:41:27.356442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.837 qpair failed and we were unable to recover it. 00:29:58.837 [2024-07-15 11:41:27.356652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.837 [2024-07-15 11:41:27.356659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.837 qpair failed and we were unable to recover it. 
00:29:58.837 [2024-07-15 11:41:27.357023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.837 [2024-07-15 11:41:27.357030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.837 qpair failed and we were unable to recover it. 00:29:58.837 [2024-07-15 11:41:27.357235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.837 [2024-07-15 11:41:27.357242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.837 qpair failed and we were unable to recover it. 00:29:58.837 [2024-07-15 11:41:27.357595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.837 [2024-07-15 11:41:27.357602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.837 qpair failed and we were unable to recover it. 00:29:58.837 [2024-07-15 11:41:27.357952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.837 [2024-07-15 11:41:27.357960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.837 qpair failed and we were unable to recover it. 00:29:58.837 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:58.837 [2024-07-15 11:41:27.358213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.837 [2024-07-15 11:41:27.358222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.837 qpair failed and we were unable to recover it. 00:29:58.837 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:58.837 [2024-07-15 11:41:27.358439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.837 [2024-07-15 11:41:27.358448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.837 qpair failed and we were unable to recover it. 00:29:58.837 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:58.837 [2024-07-15 11:41:27.358897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.837 [2024-07-15 11:41:27.358905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.837 qpair failed and we were unable to recover it. 00:29:58.837 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:58.837 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:58.837 [2024-07-15 11:41:27.359363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.837 [2024-07-15 11:41:27.359371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.837 qpair failed and we were unable to recover it. 
00:29:58.837 [2024-07-15 11:41:27.359784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.837 [2024-07-15 11:41:27.359791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420
00:29:58.837 qpair failed and we were unable to recover it.
[the same three-line failure (connect() failed, errno = 111; sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt throughout the rest of this excerpt; the duplicate entries are omitted here and below]
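Context, not part of the captured log: errno = 111 on Linux is ECONNREFUSED, meaning nothing was accepting TCP connections on 10.0.0.2:4420 at that moment (no listener yet, or the connection was actively rejected), so each reconnect attempt fails immediately. A quick way to observe the same condition from a shell on the initiator host, using only the address and port taken from the log, is:

  # probe 10.0.0.2:4420 with bash's /dev/tcp pseudo-device; prints "refused or unreachable"
  # while no NVMe-oF listener exists, and "listening" once the target adds one
  if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "listening"
  else
    echo "refused or unreachable"
  fi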
00:29:58.840 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:58.840 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:58.840 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:58.840 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
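Context, not part of the captured log: rpc_cmd at host/target_disconnect.sh@19 is the autotest wrapper around SPDK's JSON-RPC client, and bdev_malloc_create prints the name of the bdev it creates, which is why a bare "Malloc0" line appears a little further down amid the retry noise. Assuming a running target application and the default /var/tmp/spdk.sock RPC socket, the equivalent manual step would be roughly:

  # create a 64 MiB RAM-backed bdev with a 512-byte block size, named Malloc0
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0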
00:29:58.841 Malloc0
00:29:58.841 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:58.841 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:58.841 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:58.841 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
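Context, not part of the captured log: nvmf_create_transport at host/target_disconnect.sh@21 registers the TCP transport inside the target application, and the "TCP Transport Init" notice just below is the target acknowledging it. The reconnect attempts keep being refused while no socket is listening on 10.0.0.2:4420; in SPDK that socket appears once a subsystem listener is added. A minimal sketch of that target-side sequence, with the subsystem NQN and serial number chosen purely for illustration (the harness's exact flags and names may differ), is:

  # register the TCP transport (the harness passes an extra flag that is not repeated here)
  ./scripts/rpc.py nvmf_create_transport -t TCP
  # create a subsystem, attach the Malloc0 namespace, and listen on the address seen in the log
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420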
00:29:58.842 [2024-07-15 11:41:27.424463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.842 [2024-07-15 11:41:27.424470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.842 qpair failed and we were unable to recover it. 00:29:58.842 [2024-07-15 11:41:27.424721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.842 [2024-07-15 11:41:27.424727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.842 qpair failed and we were unable to recover it. 00:29:58.842 [2024-07-15 11:41:27.424783] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:58.842 [2024-07-15 11:41:27.425007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.842 [2024-07-15 11:41:27.425013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.842 qpair failed and we were unable to recover it. 00:29:58.842 [2024-07-15 11:41:27.425472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.842 [2024-07-15 11:41:27.425479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.842 qpair failed and we were unable to recover it. 00:29:58.842 [2024-07-15 11:41:27.425867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.842 [2024-07-15 11:41:27.425874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.842 qpair failed and we were unable to recover it. 00:29:58.842 [2024-07-15 11:41:27.426264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.842 [2024-07-15 11:41:27.426270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.842 qpair failed and we were unable to recover it. 00:29:58.842 [2024-07-15 11:41:27.426660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.842 [2024-07-15 11:41:27.426666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.842 qpair failed and we were unable to recover it. 00:29:58.842 [2024-07-15 11:41:27.427013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.842 [2024-07-15 11:41:27.427019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.842 qpair failed and we were unable to recover it. 00:29:58.842 [2024-07-15 11:41:27.427439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.842 [2024-07-15 11:41:27.427446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.842 qpair failed and we were unable to recover it. 
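The repeated posix.c:1038 connect() failures with errno = 111 above are the host side of the test retrying its qpair connection to 10.0.0.2:4420 and getting ECONNREFUSED, which on Linux simply means nothing is accepting on that address and port yet; the tcp.c nvmf_tcp_create NOTICE marks the point where the target's TCP transport has just been created, while no listener exists until the nvmf_subsystem_add_listener calls further down. A minimal stand-alone probe of the same condition, as a sketch only (assumptions: plain bash with /dev/tcp support and coreutils timeout, run from the initiator host; the address and port are copied from the log; none of this is part of the test harness):

  # Retry until 10.0.0.2:4420 accepts a TCP connection; until then each attempt
  # fails with "Connection refused" (errno 111), mirroring the reconnect loop above.
  until timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
      sleep 0.2
  done
  echo "10.0.0.2:4420 is now accepting connections"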
00:29:58.842 [2024-07-15 11:41:27.427812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.842 [2024-07-15 11:41:27.427819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.842 qpair failed and we were unable to recover it. 00:29:58.842 [2024-07-15 11:41:27.428234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.842 [2024-07-15 11:41:27.428240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.842 qpair failed and we were unable to recover it. 00:29:58.842 [2024-07-15 11:41:27.428617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.842 [2024-07-15 11:41:27.428624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.842 qpair failed and we were unable to recover it. 00:29:58.842 [2024-07-15 11:41:27.428841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.842 [2024-07-15 11:41:27.428847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.842 qpair failed and we were unable to recover it. 00:29:58.842 [2024-07-15 11:41:27.429269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.842 [2024-07-15 11:41:27.429275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.842 qpair failed and we were unable to recover it. 00:29:58.842 [2024-07-15 11:41:27.429646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.842 [2024-07-15 11:41:27.429653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.842 qpair failed and we were unable to recover it. 00:29:58.842 [2024-07-15 11:41:27.430064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.842 [2024-07-15 11:41:27.430071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.842 qpair failed and we were unable to recover it. 00:29:58.842 [2024-07-15 11:41:27.430463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.842 [2024-07-15 11:41:27.430469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.842 qpair failed and we were unable to recover it. 00:29:58.842 [2024-07-15 11:41:27.430876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.842 [2024-07-15 11:41:27.430883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.842 qpair failed and we were unable to recover it. 00:29:58.842 [2024-07-15 11:41:27.431004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.842 [2024-07-15 11:41:27.431010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.842 qpair failed and we were unable to recover it. 
00:29:58.842 [2024-07-15 11:41:27.431437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.842 [2024-07-15 11:41:27.431445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.842 qpair failed and we were unable to recover it. 00:29:58.842 [2024-07-15 11:41:27.431856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.842 [2024-07-15 11:41:27.431864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.842 qpair failed and we were unable to recover it. 00:29:58.842 [2024-07-15 11:41:27.432189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.842 [2024-07-15 11:41:27.432196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.842 qpair failed and we were unable to recover it. 00:29:58.842 [2024-07-15 11:41:27.432607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.842 [2024-07-15 11:41:27.432614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.842 qpair failed and we were unable to recover it. 00:29:58.842 [2024-07-15 11:41:27.433005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.842 [2024-07-15 11:41:27.433011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.842 qpair failed and we were unable to recover it. 00:29:58.842 [2024-07-15 11:41:27.433461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.842 [2024-07-15 11:41:27.433468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.842 qpair failed and we were unable to recover it. 00:29:58.842 [2024-07-15 11:41:27.433749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.842 [2024-07-15 11:41:27.433756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.842 qpair failed and we were unable to recover it. 00:29:58.842 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.842 [2024-07-15 11:41:27.434165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.434173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 
00:29:58.843 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:58.843 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.843 [2024-07-15 11:41:27.434559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.434565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:58.843 [2024-07-15 11:41:27.434737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.434744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.434957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.434964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.435223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.435230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.435510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.435516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.435959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.435966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.436343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.436350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.436599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.436606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 
00:29:58.843 [2024-07-15 11:41:27.436933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.436940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.437347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.437353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.437602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.437609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.437852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.437859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.438229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.438235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.438611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.438621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.439008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.439014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.439386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.439393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.439610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.439616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.439904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.439911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 
00:29:58.843 [2024-07-15 11:41:27.440131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.440139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.440566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.440573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.440967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.440974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.441365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.441372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.441567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.441575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.441979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.441986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.442362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.442370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.442639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.442646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.443038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.443044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.443420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.443427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 
00:29:58.843 [2024-07-15 11:41:27.443693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.443699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.443967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.443973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.444375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.444381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.444666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.444678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.444967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.444975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.445416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.445423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.445632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.445639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.843 [2024-07-15 11:41:27.445891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.445898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 [2024-07-15 11:41:27.446080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.446086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 
00:29:58.843 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:58.843 [2024-07-15 11:41:27.446408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.843 [2024-07-15 11:41:27.446416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.843 qpair failed and we were unable to recover it. 00:29:58.843 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.844 [2024-07-15 11:41:27.446669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.446675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:58.844 [2024-07-15 11:41:27.446960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.446967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 [2024-07-15 11:41:27.447358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.447365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 [2024-07-15 11:41:27.447733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.447740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 [2024-07-15 11:41:27.447951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.447957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 [2024-07-15 11:41:27.448379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.448388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 [2024-07-15 11:41:27.448579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.448586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 
00:29:58.844 [2024-07-15 11:41:27.448996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.449003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 [2024-07-15 11:41:27.449207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.449214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 [2024-07-15 11:41:27.449621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.449627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 [2024-07-15 11:41:27.449994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.450000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 [2024-07-15 11:41:27.450392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.450399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 [2024-07-15 11:41:27.450608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.450614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 [2024-07-15 11:41:27.451031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.451039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 [2024-07-15 11:41:27.451334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.451341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 [2024-07-15 11:41:27.451557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.451563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 [2024-07-15 11:41:27.451960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.451967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 
00:29:58.844 [2024-07-15 11:41:27.452356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.452363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 [2024-07-15 11:41:27.452575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.452581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 [2024-07-15 11:41:27.452973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.452980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 [2024-07-15 11:41:27.453236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.453244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 [2024-07-15 11:41:27.453638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.453645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 [2024-07-15 11:41:27.454031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.454038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 [2024-07-15 11:41:27.454422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.454428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 [2024-07-15 11:41:27.454633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.454639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 [2024-07-15 11:41:27.455001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.455007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 [2024-07-15 11:41:27.455378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.455384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 
00:29:58.844 [2024-07-15 11:41:27.455593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.455600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 [2024-07-15 11:41:27.455950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.455956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 [2024-07-15 11:41:27.456359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.456366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 [2024-07-15 11:41:27.456733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.456739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 [2024-07-15 11:41:27.456961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.456967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 [2024-07-15 11:41:27.457339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.457345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.844 [2024-07-15 11:41:27.457734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.844 [2024-07-15 11:41:27.457741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.844 qpair failed and we were unable to recover it. 00:29:58.845 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.845 [2024-07-15 11:41:27.458121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.845 [2024-07-15 11:41:27.458131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.845 qpair failed and we were unable to recover it. 00:29:58.845 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:58.845 [2024-07-15 11:41:27.458363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.845 [2024-07-15 11:41:27.458369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.845 qpair failed and we were unable to recover it. 
00:29:58.845 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.845 [2024-07-15 11:41:27.458723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.845 [2024-07-15 11:41:27.458731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.845 qpair failed and we were unable to recover it. 00:29:58.845 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:58.845 [2024-07-15 11:41:27.459113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.845 [2024-07-15 11:41:27.459120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.845 qpair failed and we were unable to recover it. 00:29:58.845 [2024-07-15 11:41:27.459535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.845 [2024-07-15 11:41:27.459543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.845 qpair failed and we were unable to recover it. 00:29:58.845 [2024-07-15 11:41:27.459757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.845 [2024-07-15 11:41:27.459763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.845 qpair failed and we were unable to recover it. 00:29:58.845 [2024-07-15 11:41:27.459947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.845 [2024-07-15 11:41:27.459955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.845 qpair failed and we were unable to recover it. 00:29:58.845 [2024-07-15 11:41:27.460165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.845 [2024-07-15 11:41:27.460172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.845 qpair failed and we were unable to recover it. 00:29:58.845 [2024-07-15 11:41:27.460575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.845 [2024-07-15 11:41:27.460581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.845 qpair failed and we were unable to recover it. 00:29:58.845 [2024-07-15 11:41:27.461012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.845 [2024-07-15 11:41:27.461018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.845 qpair failed and we were unable to recover it. 00:29:58.845 [2024-07-15 11:41:27.461524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.845 [2024-07-15 11:41:27.461532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.845 qpair failed and we were unable to recover it. 
00:29:58.845 [2024-07-15 11:41:27.461908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.845 [2024-07-15 11:41:27.461915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.845 qpair failed and we were unable to recover it. 00:29:58.845 [2024-07-15 11:41:27.462300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.845 [2024-07-15 11:41:27.462307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.845 qpair failed and we were unable to recover it. 00:29:58.845 [2024-07-15 11:41:27.462761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.845 [2024-07-15 11:41:27.462767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.845 qpair failed and we were unable to recover it. 00:29:58.845 [2024-07-15 11:41:27.463134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.845 [2024-07-15 11:41:27.463141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.845 qpair failed and we were unable to recover it. 00:29:58.845 [2024-07-15 11:41:27.463526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.845 [2024-07-15 11:41:27.463533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.845 qpair failed and we were unable to recover it. 00:29:58.845 [2024-07-15 11:41:27.463979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.845 [2024-07-15 11:41:27.463986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.845 qpair failed and we were unable to recover it. 00:29:58.845 [2024-07-15 11:41:27.464336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.845 [2024-07-15 11:41:27.464343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.845 qpair failed and we were unable to recover it. 00:29:58.845 [2024-07-15 11:41:27.464547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.845 [2024-07-15 11:41:27.464553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.845 qpair failed and we were unable to recover it. 00:29:58.845 [2024-07-15 11:41:27.464921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.845 [2024-07-15 11:41:27.464928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7fb0000b90 with addr=10.0.0.2, port=4420 00:29:58.845 qpair failed and we were unable to recover it. 
00:29:58.845 [2024-07-15 11:41:27.465047] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:58.845 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.845 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:58.845 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.845 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:58.845 [2024-07-15 11:41:27.475613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.845 [2024-07-15 11:41:27.475697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.845 [2024-07-15 11:41:27.475715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.845 [2024-07-15 11:41:27.475720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.845 [2024-07-15 11:41:27.475725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:58.845 [2024-07-15 11:41:27.475740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.845 qpair failed and we were unable to recover it. 00:29:58.845 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.845 11:41:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3738598 00:29:58.845 [2024-07-15 11:41:27.485571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.845 [2024-07-15 11:41:27.485654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.845 [2024-07-15 11:41:27.485667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.845 [2024-07-15 11:41:27.485672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.845 [2024-07-15 11:41:27.485676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:58.845 [2024-07-15 11:41:27.485688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.845 qpair failed and we were unable to recover it. 
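Scattered through the xtrace lines above is the target-side bring-up performed by host/target_disconnect.sh, interleaved with the host's reconnect errors. Pulled together in order, and copied verbatim from the trace, the sequence is the one below (assumption: rpc_cmd is the autotest wrapper that forwards its arguments to scripts/rpc.py against the running nvmf_tgt; this is a consolidated restatement for readability, not an extra step of the test):

  # Target-side bring-up as traced above; the two *NOTICE* lines in the log
  # ("TCP Transport Init" and "NVMe/TCP Target Listening on 10.0.0.2 port 4420")
  # are emitted by nvmf_tgt as these calls take effect.
  rpc_cmd nvmf_create_transport -t tcp -o
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420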
00:29:58.845 [2024-07-15 11:41:27.495715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.845 [2024-07-15 11:41:27.495783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.845 [2024-07-15 11:41:27.495795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.845 [2024-07-15 11:41:27.495800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.845 [2024-07-15 11:41:27.495804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:58.845 [2024-07-15 11:41:27.495816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.845 qpair failed and we were unable to recover it. 00:29:58.845 [2024-07-15 11:41:27.505555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.845 [2024-07-15 11:41:27.505651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.845 [2024-07-15 11:41:27.505663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.845 [2024-07-15 11:41:27.505670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.845 [2024-07-15 11:41:27.505674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:58.845 [2024-07-15 11:41:27.505685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.845 qpair failed and we were unable to recover it. 00:29:58.845 [2024-07-15 11:41:27.515567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.845 [2024-07-15 11:41:27.515638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.845 [2024-07-15 11:41:27.515651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.845 [2024-07-15 11:41:27.515658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.845 [2024-07-15 11:41:27.515662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:58.845 [2024-07-15 11:41:27.515673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.845 qpair failed and we were unable to recover it. 
00:29:59.108 [2024-07-15 11:41:27.525603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.108 [2024-07-15 11:41:27.525667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.108 [2024-07-15 11:41:27.525679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.108 [2024-07-15 11:41:27.525684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.108 [2024-07-15 11:41:27.525688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.108 [2024-07-15 11:41:27.525699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.108 qpair failed and we were unable to recover it. 00:29:59.108 [2024-07-15 11:41:27.535539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.108 [2024-07-15 11:41:27.535606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.108 [2024-07-15 11:41:27.535619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.108 [2024-07-15 11:41:27.535624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.108 [2024-07-15 11:41:27.535628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.108 [2024-07-15 11:41:27.535640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.108 qpair failed and we were unable to recover it. 00:29:59.108 [2024-07-15 11:41:27.545638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.108 [2024-07-15 11:41:27.545706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.108 [2024-07-15 11:41:27.545718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.108 [2024-07-15 11:41:27.545723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.108 [2024-07-15 11:41:27.545727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.108 [2024-07-15 11:41:27.545738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.108 qpair failed and we were unable to recover it. 
00:29:59.108 [2024-07-15 11:41:27.555679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.108 [2024-07-15 11:41:27.555752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.108 [2024-07-15 11:41:27.555764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.108 [2024-07-15 11:41:27.555768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.108 [2024-07-15 11:41:27.555773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.108 [2024-07-15 11:41:27.555783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.108 qpair failed and we were unable to recover it. 00:29:59.108 [2024-07-15 11:41:27.565687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.108 [2024-07-15 11:41:27.565756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.108 [2024-07-15 11:41:27.565775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.108 [2024-07-15 11:41:27.565781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.108 [2024-07-15 11:41:27.565785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.108 [2024-07-15 11:41:27.565799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.108 qpair failed and we were unable to recover it. 00:29:59.108 [2024-07-15 11:41:27.575713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.109 [2024-07-15 11:41:27.575775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.109 [2024-07-15 11:41:27.575788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.109 [2024-07-15 11:41:27.575793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.109 [2024-07-15 11:41:27.575798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.109 [2024-07-15 11:41:27.575809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.109 qpair failed and we were unable to recover it. 
00:29:59.109 [2024-07-15 11:41:27.585754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.109 [2024-07-15 11:41:27.585817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.109 [2024-07-15 11:41:27.585830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.109 [2024-07-15 11:41:27.585835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.109 [2024-07-15 11:41:27.585839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.109 [2024-07-15 11:41:27.585850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.109 qpair failed and we were unable to recover it. 00:29:59.109 [2024-07-15 11:41:27.595820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.109 [2024-07-15 11:41:27.595896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.109 [2024-07-15 11:41:27.595915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.109 [2024-07-15 11:41:27.595921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.109 [2024-07-15 11:41:27.595925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.109 [2024-07-15 11:41:27.595939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.109 qpair failed and we were unable to recover it. 00:29:59.109 [2024-07-15 11:41:27.605776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.109 [2024-07-15 11:41:27.605839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.109 [2024-07-15 11:41:27.605861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.109 [2024-07-15 11:41:27.605867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.109 [2024-07-15 11:41:27.605872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.109 [2024-07-15 11:41:27.605885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.109 qpair failed and we were unable to recover it. 
00:29:59.109 [2024-07-15 11:41:27.615856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.109 [2024-07-15 11:41:27.615930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.109 [2024-07-15 11:41:27.615948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.109 [2024-07-15 11:41:27.615954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.109 [2024-07-15 11:41:27.615959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.109 [2024-07-15 11:41:27.615973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.109 qpair failed and we were unable to recover it. 00:29:59.109 [2024-07-15 11:41:27.625870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.109 [2024-07-15 11:41:27.625933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.109 [2024-07-15 11:41:27.625946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.109 [2024-07-15 11:41:27.625951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.109 [2024-07-15 11:41:27.625956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.109 [2024-07-15 11:41:27.625967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.109 qpair failed and we were unable to recover it. 00:29:59.109 [2024-07-15 11:41:27.635905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.109 [2024-07-15 11:41:27.635981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.109 [2024-07-15 11:41:27.636000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.109 [2024-07-15 11:41:27.636006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.109 [2024-07-15 11:41:27.636010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.109 [2024-07-15 11:41:27.636024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.109 qpair failed and we were unable to recover it. 
00:29:59.109 [2024-07-15 11:41:27.645954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.109 [2024-07-15 11:41:27.646016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.109 [2024-07-15 11:41:27.646029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.109 [2024-07-15 11:41:27.646035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.109 [2024-07-15 11:41:27.646039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.109 [2024-07-15 11:41:27.646054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.109 qpair failed and we were unable to recover it. 00:29:59.109 [2024-07-15 11:41:27.655986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.109 [2024-07-15 11:41:27.656085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.109 [2024-07-15 11:41:27.656097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.109 [2024-07-15 11:41:27.656103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.109 [2024-07-15 11:41:27.656107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.109 [2024-07-15 11:41:27.656118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.109 qpair failed and we were unable to recover it. 00:29:59.109 [2024-07-15 11:41:27.665970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.109 [2024-07-15 11:41:27.666038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.109 [2024-07-15 11:41:27.666050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.109 [2024-07-15 11:41:27.666055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.109 [2024-07-15 11:41:27.666059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.109 [2024-07-15 11:41:27.666070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.109 qpair failed and we were unable to recover it. 
00:29:59.109 [2024-07-15 11:41:27.676021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.109 [2024-07-15 11:41:27.676088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.109 [2024-07-15 11:41:27.676100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.109 [2024-07-15 11:41:27.676105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.109 [2024-07-15 11:41:27.676110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.109 [2024-07-15 11:41:27.676120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.109 qpair failed and we were unable to recover it. 00:29:59.109 [2024-07-15 11:41:27.686065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.109 [2024-07-15 11:41:27.686129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.109 [2024-07-15 11:41:27.686141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.109 [2024-07-15 11:41:27.686146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.109 [2024-07-15 11:41:27.686151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.109 [2024-07-15 11:41:27.686161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.109 qpair failed and we were unable to recover it. 00:29:59.109 [2024-07-15 11:41:27.696073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.109 [2024-07-15 11:41:27.696146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.109 [2024-07-15 11:41:27.696161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.109 [2024-07-15 11:41:27.696166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.109 [2024-07-15 11:41:27.696170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.109 [2024-07-15 11:41:27.696181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.109 qpair failed and we were unable to recover it. 
00:29:59.109 [2024-07-15 11:41:27.706079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.109 [2024-07-15 11:41:27.706148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.109 [2024-07-15 11:41:27.706160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.109 [2024-07-15 11:41:27.706165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.109 [2024-07-15 11:41:27.706169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.109 [2024-07-15 11:41:27.706181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.109 qpair failed and we were unable to recover it. 00:29:59.109 [2024-07-15 11:41:27.716136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.109 [2024-07-15 11:41:27.716209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.109 [2024-07-15 11:41:27.716220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.110 [2024-07-15 11:41:27.716225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.110 [2024-07-15 11:41:27.716229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.110 [2024-07-15 11:41:27.716240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.110 qpair failed and we were unable to recover it. 00:29:59.110 [2024-07-15 11:41:27.726218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.110 [2024-07-15 11:41:27.726323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.110 [2024-07-15 11:41:27.726335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.110 [2024-07-15 11:41:27.726340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.110 [2024-07-15 11:41:27.726344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.110 [2024-07-15 11:41:27.726355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.110 qpair failed and we were unable to recover it. 
00:29:59.110 [2024-07-15 11:41:27.736198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.110 [2024-07-15 11:41:27.736260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.110 [2024-07-15 11:41:27.736271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.110 [2024-07-15 11:41:27.736276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.110 [2024-07-15 11:41:27.736283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.110 [2024-07-15 11:41:27.736294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.110 qpair failed and we were unable to recover it. 00:29:59.110 [2024-07-15 11:41:27.746298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.110 [2024-07-15 11:41:27.746384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.110 [2024-07-15 11:41:27.746395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.110 [2024-07-15 11:41:27.746400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.110 [2024-07-15 11:41:27.746404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.110 [2024-07-15 11:41:27.746415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.110 qpair failed and we were unable to recover it. 00:29:59.110 [2024-07-15 11:41:27.756260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.110 [2024-07-15 11:41:27.756365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.110 [2024-07-15 11:41:27.756377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.110 [2024-07-15 11:41:27.756381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.110 [2024-07-15 11:41:27.756386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.110 [2024-07-15 11:41:27.756396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.110 qpair failed and we were unable to recover it. 
00:29:59.110 [2024-07-15 11:41:27.766382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.110 [2024-07-15 11:41:27.766448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.110 [2024-07-15 11:41:27.766460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.110 [2024-07-15 11:41:27.766465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.110 [2024-07-15 11:41:27.766469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.110 [2024-07-15 11:41:27.766480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.110 qpair failed and we were unable to recover it. 00:29:59.110 [2024-07-15 11:41:27.776427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.110 [2024-07-15 11:41:27.776524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.110 [2024-07-15 11:41:27.776536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.110 [2024-07-15 11:41:27.776541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.110 [2024-07-15 11:41:27.776546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.110 [2024-07-15 11:41:27.776557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.110 qpair failed and we were unable to recover it. 00:29:59.110 [2024-07-15 11:41:27.786389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.110 [2024-07-15 11:41:27.786491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.110 [2024-07-15 11:41:27.786502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.110 [2024-07-15 11:41:27.786507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.110 [2024-07-15 11:41:27.786511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.110 [2024-07-15 11:41:27.786522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.110 qpair failed and we were unable to recover it. 
00:29:59.110 [2024-07-15 11:41:27.796345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.110 [2024-07-15 11:41:27.796415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.110 [2024-07-15 11:41:27.796427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.110 [2024-07-15 11:41:27.796432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.110 [2024-07-15 11:41:27.796436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.110 [2024-07-15 11:41:27.796447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.110 qpair failed and we were unable to recover it. 00:29:59.110 [2024-07-15 11:41:27.806380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.110 [2024-07-15 11:41:27.806450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.110 [2024-07-15 11:41:27.806462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.110 [2024-07-15 11:41:27.806467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.110 [2024-07-15 11:41:27.806472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.110 [2024-07-15 11:41:27.806483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.110 qpair failed and we were unable to recover it. 00:29:59.372 [2024-07-15 11:41:27.816381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.372 [2024-07-15 11:41:27.816445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.372 [2024-07-15 11:41:27.816457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.372 [2024-07-15 11:41:27.816461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.372 [2024-07-15 11:41:27.816466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.372 [2024-07-15 11:41:27.816476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.372 qpair failed and we were unable to recover it. 
00:29:59.372 [2024-07-15 11:41:27.826444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.372 [2024-07-15 11:41:27.826509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.372 [2024-07-15 11:41:27.826520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.372 [2024-07-15 11:41:27.826525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.372 [2024-07-15 11:41:27.826533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.372 [2024-07-15 11:41:27.826543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.372 qpair failed and we were unable to recover it. 00:29:59.372 [2024-07-15 11:41:27.836515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.372 [2024-07-15 11:41:27.836586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.372 [2024-07-15 11:41:27.836598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.372 [2024-07-15 11:41:27.836602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.372 [2024-07-15 11:41:27.836607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.372 [2024-07-15 11:41:27.836617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.372 qpair failed and we were unable to recover it. 00:29:59.373 [2024-07-15 11:41:27.846474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.373 [2024-07-15 11:41:27.846537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.373 [2024-07-15 11:41:27.846548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.373 [2024-07-15 11:41:27.846553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.373 [2024-07-15 11:41:27.846557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.373 [2024-07-15 11:41:27.846568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.373 qpair failed and we were unable to recover it. 
00:29:59.373 [2024-07-15 11:41:27.856531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.373 [2024-07-15 11:41:27.856611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.373 [2024-07-15 11:41:27.856622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.373 [2024-07-15 11:41:27.856627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.373 [2024-07-15 11:41:27.856631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.373 [2024-07-15 11:41:27.856642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.373 qpair failed and we were unable to recover it. 00:29:59.373 [2024-07-15 11:41:27.866581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.373 [2024-07-15 11:41:27.866646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.373 [2024-07-15 11:41:27.866658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.373 [2024-07-15 11:41:27.866663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.373 [2024-07-15 11:41:27.866667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.373 [2024-07-15 11:41:27.866677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.373 qpair failed and we were unable to recover it. 00:29:59.373 [2024-07-15 11:41:27.876570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.373 [2024-07-15 11:41:27.876639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.373 [2024-07-15 11:41:27.876652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.373 [2024-07-15 11:41:27.876656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.373 [2024-07-15 11:41:27.876660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.373 [2024-07-15 11:41:27.876671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.373 qpair failed and we were unable to recover it. 
00:29:59.373 [2024-07-15 11:41:27.886670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.373 [2024-07-15 11:41:27.886730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.373 [2024-07-15 11:41:27.886742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.373 [2024-07-15 11:41:27.886746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.373 [2024-07-15 11:41:27.886751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.373 [2024-07-15 11:41:27.886761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.373 qpair failed and we were unable to recover it. 00:29:59.373 [2024-07-15 11:41:27.896519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.373 [2024-07-15 11:41:27.896583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.373 [2024-07-15 11:41:27.896594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.373 [2024-07-15 11:41:27.896599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.373 [2024-07-15 11:41:27.896603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.373 [2024-07-15 11:41:27.896613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.373 qpair failed and we were unable to recover it. 00:29:59.373 [2024-07-15 11:41:27.906669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.373 [2024-07-15 11:41:27.906732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.373 [2024-07-15 11:41:27.906743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.373 [2024-07-15 11:41:27.906747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.373 [2024-07-15 11:41:27.906752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.373 [2024-07-15 11:41:27.906762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.373 qpair failed and we were unable to recover it. 
00:29:59.373 [2024-07-15 11:41:27.916681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.373 [2024-07-15 11:41:27.916751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.373 [2024-07-15 11:41:27.916763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.373 [2024-07-15 11:41:27.916773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.373 [2024-07-15 11:41:27.916778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.373 [2024-07-15 11:41:27.916788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.373 qpair failed and we were unable to recover it. 00:29:59.373 [2024-07-15 11:41:27.926736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.373 [2024-07-15 11:41:27.926798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.373 [2024-07-15 11:41:27.926811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.373 [2024-07-15 11:41:27.926816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.373 [2024-07-15 11:41:27.926820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.373 [2024-07-15 11:41:27.926831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.373 qpair failed and we were unable to recover it. 00:29:59.373 [2024-07-15 11:41:27.936751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.373 [2024-07-15 11:41:27.936813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.373 [2024-07-15 11:41:27.936824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.373 [2024-07-15 11:41:27.936829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.373 [2024-07-15 11:41:27.936833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.373 [2024-07-15 11:41:27.936844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.373 qpair failed and we were unable to recover it. 
00:29:59.373 [2024-07-15 11:41:27.946787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.373 [2024-07-15 11:41:27.946856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.373 [2024-07-15 11:41:27.946875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.373 [2024-07-15 11:41:27.946881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.373 [2024-07-15 11:41:27.946885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.373 [2024-07-15 11:41:27.946899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.373 qpair failed and we were unable to recover it. 00:29:59.373 [2024-07-15 11:41:27.956852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.373 [2024-07-15 11:41:27.956924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.373 [2024-07-15 11:41:27.956942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.373 [2024-07-15 11:41:27.956948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.373 [2024-07-15 11:41:27.956953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.373 [2024-07-15 11:41:27.956967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.373 qpair failed and we were unable to recover it. 00:29:59.373 [2024-07-15 11:41:27.966839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.373 [2024-07-15 11:41:27.966907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.373 [2024-07-15 11:41:27.966926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.373 [2024-07-15 11:41:27.966932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.373 [2024-07-15 11:41:27.966936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.373 [2024-07-15 11:41:27.966950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.373 qpair failed and we were unable to recover it. 
00:29:59.373 [2024-07-15 11:41:27.976854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.373 [2024-07-15 11:41:27.976922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.373 [2024-07-15 11:41:27.976940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.373 [2024-07-15 11:41:27.976946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.373 [2024-07-15 11:41:27.976951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.374 [2024-07-15 11:41:27.976965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.374 qpair failed and we were unable to recover it. 00:29:59.374 [2024-07-15 11:41:27.986881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.374 [2024-07-15 11:41:27.986949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.374 [2024-07-15 11:41:27.986962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.374 [2024-07-15 11:41:27.986967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.374 [2024-07-15 11:41:27.986971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.374 [2024-07-15 11:41:27.986983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.374 qpair failed and we were unable to recover it. 00:29:59.374 [2024-07-15 11:41:27.996920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.374 [2024-07-15 11:41:27.997007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.374 [2024-07-15 11:41:27.997020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.374 [2024-07-15 11:41:27.997024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.374 [2024-07-15 11:41:27.997029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.374 [2024-07-15 11:41:27.997042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.374 qpair failed and we were unable to recover it. 
00:29:59.374 [2024-07-15 11:41:28.006932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.374 [2024-07-15 11:41:28.007002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.374 [2024-07-15 11:41:28.007018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.374 [2024-07-15 11:41:28.007023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.374 [2024-07-15 11:41:28.007027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.374 [2024-07-15 11:41:28.007039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.374 qpair failed and we were unable to recover it. 00:29:59.374 [2024-07-15 11:41:28.016969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.374 [2024-07-15 11:41:28.017032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.374 [2024-07-15 11:41:28.017044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.374 [2024-07-15 11:41:28.017049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.374 [2024-07-15 11:41:28.017053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.374 [2024-07-15 11:41:28.017064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.374 qpair failed and we were unable to recover it. 00:29:59.374 [2024-07-15 11:41:28.026996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.374 [2024-07-15 11:41:28.027061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.374 [2024-07-15 11:41:28.027073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.374 [2024-07-15 11:41:28.027078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.374 [2024-07-15 11:41:28.027082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.374 [2024-07-15 11:41:28.027093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.374 qpair failed and we were unable to recover it. 
00:29:59.374 [2024-07-15 11:41:28.036931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.374 [2024-07-15 11:41:28.037016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.374 [2024-07-15 11:41:28.037026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.374 [2024-07-15 11:41:28.037031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.374 [2024-07-15 11:41:28.037035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.374 [2024-07-15 11:41:28.037046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.374 qpair failed and we were unable to recover it. 00:29:59.374 [2024-07-15 11:41:28.047047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.374 [2024-07-15 11:41:28.047114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.374 [2024-07-15 11:41:28.047130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.374 [2024-07-15 11:41:28.047135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.374 [2024-07-15 11:41:28.047139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.374 [2024-07-15 11:41:28.047153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.374 qpair failed and we were unable to recover it. 00:29:59.374 [2024-07-15 11:41:28.057082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.374 [2024-07-15 11:41:28.057157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.374 [2024-07-15 11:41:28.057169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.374 [2024-07-15 11:41:28.057173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.374 [2024-07-15 11:41:28.057178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.374 [2024-07-15 11:41:28.057188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.374 qpair failed and we were unable to recover it. 
00:29:59.374 [2024-07-15 11:41:28.067066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.374 [2024-07-15 11:41:28.067224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.374 [2024-07-15 11:41:28.067236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.374 [2024-07-15 11:41:28.067241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.374 [2024-07-15 11:41:28.067245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.374 [2024-07-15 11:41:28.067256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.374 qpair failed and we were unable to recover it. 00:29:59.637 [2024-07-15 11:41:28.077142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.638 [2024-07-15 11:41:28.077210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.638 [2024-07-15 11:41:28.077222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.638 [2024-07-15 11:41:28.077227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.638 [2024-07-15 11:41:28.077231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.638 [2024-07-15 11:41:28.077242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.638 qpair failed and we were unable to recover it. 00:29:59.638 [2024-07-15 11:41:28.087166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.638 [2024-07-15 11:41:28.087228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.638 [2024-07-15 11:41:28.087240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.638 [2024-07-15 11:41:28.087245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.638 [2024-07-15 11:41:28.087249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.638 [2024-07-15 11:41:28.087260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.638 qpair failed and we were unable to recover it. 
00:29:59.638 [2024-07-15 11:41:28.097202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.638 [2024-07-15 11:41:28.097262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.638 [2024-07-15 11:41:28.097276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.638 [2024-07-15 11:41:28.097281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.638 [2024-07-15 11:41:28.097285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.638 [2024-07-15 11:41:28.097297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.638 qpair failed and we were unable to recover it. 00:29:59.638 [2024-07-15 11:41:28.107233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.638 [2024-07-15 11:41:28.107300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.638 [2024-07-15 11:41:28.107312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.638 [2024-07-15 11:41:28.107317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.638 [2024-07-15 11:41:28.107321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.638 [2024-07-15 11:41:28.107333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.638 qpair failed and we were unable to recover it. 00:29:59.638 [2024-07-15 11:41:28.117264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.638 [2024-07-15 11:41:28.117338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.638 [2024-07-15 11:41:28.117349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.638 [2024-07-15 11:41:28.117354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.638 [2024-07-15 11:41:28.117358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.638 [2024-07-15 11:41:28.117369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.638 qpair failed and we were unable to recover it. 
00:29:59.638 [2024-07-15 11:41:28.127311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.638 [2024-07-15 11:41:28.127379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.638 [2024-07-15 11:41:28.127391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.638 [2024-07-15 11:41:28.127396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.638 [2024-07-15 11:41:28.127400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.638 [2024-07-15 11:41:28.127410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.638 qpair failed and we were unable to recover it. 00:29:59.638 [2024-07-15 11:41:28.137316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.638 [2024-07-15 11:41:28.137381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.638 [2024-07-15 11:41:28.137393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.638 [2024-07-15 11:41:28.137397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.638 [2024-07-15 11:41:28.137401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.638 [2024-07-15 11:41:28.137416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.638 qpair failed and we were unable to recover it. 00:29:59.638 [2024-07-15 11:41:28.147332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.638 [2024-07-15 11:41:28.147399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.638 [2024-07-15 11:41:28.147411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.638 [2024-07-15 11:41:28.147416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.638 [2024-07-15 11:41:28.147420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.638 [2024-07-15 11:41:28.147431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.638 qpair failed and we were unable to recover it. 
00:29:59.638 [2024-07-15 11:41:28.157355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.638 [2024-07-15 11:41:28.157423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.638 [2024-07-15 11:41:28.157434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.638 [2024-07-15 11:41:28.157439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.638 [2024-07-15 11:41:28.157443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.638 [2024-07-15 11:41:28.157454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.638 qpair failed and we were unable to recover it. 00:29:59.638 [2024-07-15 11:41:28.167427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.638 [2024-07-15 11:41:28.167494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.638 [2024-07-15 11:41:28.167505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.638 [2024-07-15 11:41:28.167510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.638 [2024-07-15 11:41:28.167515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.638 [2024-07-15 11:41:28.167526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.638 qpair failed and we were unable to recover it. 00:29:59.638 [2024-07-15 11:41:28.177429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.638 [2024-07-15 11:41:28.177493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.638 [2024-07-15 11:41:28.177504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.638 [2024-07-15 11:41:28.177509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.638 [2024-07-15 11:41:28.177513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.638 [2024-07-15 11:41:28.177524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.638 qpair failed and we were unable to recover it. 
00:29:59.638 [2024-07-15 11:41:28.187443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.638 [2024-07-15 11:41:28.187516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.638 [2024-07-15 11:41:28.187527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.638 [2024-07-15 11:41:28.187532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.638 [2024-07-15 11:41:28.187536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.638 [2024-07-15 11:41:28.187547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.638 qpair failed and we were unable to recover it. 00:29:59.638 [2024-07-15 11:41:28.197471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.638 [2024-07-15 11:41:28.197539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.638 [2024-07-15 11:41:28.197550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.639 [2024-07-15 11:41:28.197555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.639 [2024-07-15 11:41:28.197559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.639 [2024-07-15 11:41:28.197570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.639 qpair failed and we were unable to recover it. 00:29:59.639 [2024-07-15 11:41:28.207499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.639 [2024-07-15 11:41:28.207562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.639 [2024-07-15 11:41:28.207574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.639 [2024-07-15 11:41:28.207578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.639 [2024-07-15 11:41:28.207583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.639 [2024-07-15 11:41:28.207593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.639 qpair failed and we were unable to recover it. 
00:29:59.639 [2024-07-15 11:41:28.217540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.639 [2024-07-15 11:41:28.217602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.639 [2024-07-15 11:41:28.217614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.639 [2024-07-15 11:41:28.217618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.639 [2024-07-15 11:41:28.217622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.639 [2024-07-15 11:41:28.217633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.639 qpair failed and we were unable to recover it. 00:29:59.639 [2024-07-15 11:41:28.227593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.639 [2024-07-15 11:41:28.227686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.639 [2024-07-15 11:41:28.227697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.639 [2024-07-15 11:41:28.227702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.639 [2024-07-15 11:41:28.227709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.639 [2024-07-15 11:41:28.227720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.639 qpair failed and we were unable to recover it. 00:29:59.639 [2024-07-15 11:41:28.237598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.639 [2024-07-15 11:41:28.237667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.639 [2024-07-15 11:41:28.237678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.639 [2024-07-15 11:41:28.237683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.639 [2024-07-15 11:41:28.237687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.639 [2024-07-15 11:41:28.237698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.639 qpair failed and we were unable to recover it. 
00:29:59.639 [2024-07-15 11:41:28.247623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.639 [2024-07-15 11:41:28.247683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.639 [2024-07-15 11:41:28.247695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.639 [2024-07-15 11:41:28.247700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.639 [2024-07-15 11:41:28.247704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.639 [2024-07-15 11:41:28.247714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.639 qpair failed and we were unable to recover it. 00:29:59.639 [2024-07-15 11:41:28.257554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.639 [2024-07-15 11:41:28.257622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.639 [2024-07-15 11:41:28.257633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.639 [2024-07-15 11:41:28.257638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.639 [2024-07-15 11:41:28.257642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.639 [2024-07-15 11:41:28.257652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.639 qpair failed and we were unable to recover it. 00:29:59.639 [2024-07-15 11:41:28.267678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.639 [2024-07-15 11:41:28.267747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.639 [2024-07-15 11:41:28.267758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.639 [2024-07-15 11:41:28.267763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.639 [2024-07-15 11:41:28.267767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.639 [2024-07-15 11:41:28.267778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.639 qpair failed and we were unable to recover it. 
00:29:59.639 [2024-07-15 11:41:28.277711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.639 [2024-07-15 11:41:28.277778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.639 [2024-07-15 11:41:28.277790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.639 [2024-07-15 11:41:28.277796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.639 [2024-07-15 11:41:28.277800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.639 [2024-07-15 11:41:28.277810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.639 qpair failed and we were unable to recover it. 00:29:59.639 [2024-07-15 11:41:28.287718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.639 [2024-07-15 11:41:28.287786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.639 [2024-07-15 11:41:28.287797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.639 [2024-07-15 11:41:28.287802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.639 [2024-07-15 11:41:28.287807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.639 [2024-07-15 11:41:28.287817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.639 qpair failed and we were unable to recover it. 00:29:59.639 [2024-07-15 11:41:28.297764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.639 [2024-07-15 11:41:28.297835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.639 [2024-07-15 11:41:28.297853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.639 [2024-07-15 11:41:28.297859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.639 [2024-07-15 11:41:28.297864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.639 [2024-07-15 11:41:28.297878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.639 qpair failed and we were unable to recover it. 
00:29:59.639 [2024-07-15 11:41:28.307834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.639 [2024-07-15 11:41:28.307911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.639 [2024-07-15 11:41:28.307929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.639 [2024-07-15 11:41:28.307936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.639 [2024-07-15 11:41:28.307940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.639 [2024-07-15 11:41:28.307954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.639 qpair failed and we were unable to recover it. 00:29:59.639 [2024-07-15 11:41:28.317811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.639 [2024-07-15 11:41:28.317884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.639 [2024-07-15 11:41:28.317903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.640 [2024-07-15 11:41:28.317912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.640 [2024-07-15 11:41:28.317917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.640 [2024-07-15 11:41:28.317931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.640 qpair failed and we were unable to recover it. 00:29:59.640 [2024-07-15 11:41:28.327917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.640 [2024-07-15 11:41:28.328024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.640 [2024-07-15 11:41:28.328042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.640 [2024-07-15 11:41:28.328048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.640 [2024-07-15 11:41:28.328053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.640 [2024-07-15 11:41:28.328067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.640 qpair failed and we were unable to recover it. 
00:29:59.640 [2024-07-15 11:41:28.337871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.640 [2024-07-15 11:41:28.337936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.640 [2024-07-15 11:41:28.337949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.640 [2024-07-15 11:41:28.337954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.640 [2024-07-15 11:41:28.337959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.640 [2024-07-15 11:41:28.337970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.640 qpair failed and we were unable to recover it. 00:29:59.902 [2024-07-15 11:41:28.347908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.902 [2024-07-15 11:41:28.347970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.902 [2024-07-15 11:41:28.347982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.902 [2024-07-15 11:41:28.347987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.902 [2024-07-15 11:41:28.347991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.902 [2024-07-15 11:41:28.348002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.902 qpair failed and we were unable to recover it. 00:29:59.902 [2024-07-15 11:41:28.357913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.902 [2024-07-15 11:41:28.357981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.902 [2024-07-15 11:41:28.357993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.902 [2024-07-15 11:41:28.357998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.902 [2024-07-15 11:41:28.358002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.902 [2024-07-15 11:41:28.358013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.902 qpair failed and we were unable to recover it. 
00:29:59.902 [2024-07-15 11:41:28.367946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.902 [2024-07-15 11:41:28.368010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.902 [2024-07-15 11:41:28.368021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.902 [2024-07-15 11:41:28.368026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.902 [2024-07-15 11:41:28.368030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.902 [2024-07-15 11:41:28.368041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.902 qpair failed and we were unable to recover it. 00:29:59.902 [2024-07-15 11:41:28.377874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.902 [2024-07-15 11:41:28.377936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.902 [2024-07-15 11:41:28.377948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.902 [2024-07-15 11:41:28.377953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.902 [2024-07-15 11:41:28.377957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.902 [2024-07-15 11:41:28.377968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.902 qpair failed and we were unable to recover it. 00:29:59.902 [2024-07-15 11:41:28.387903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.902 [2024-07-15 11:41:28.387968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.902 [2024-07-15 11:41:28.387979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.902 [2024-07-15 11:41:28.387984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.902 [2024-07-15 11:41:28.387988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.902 [2024-07-15 11:41:28.387999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.902 qpair failed and we were unable to recover it. 
00:29:59.902 [2024-07-15 11:41:28.398037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.902 [2024-07-15 11:41:28.398108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.902 [2024-07-15 11:41:28.398120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.902 [2024-07-15 11:41:28.398127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.902 [2024-07-15 11:41:28.398132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.902 [2024-07-15 11:41:28.398142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.902 qpair failed and we were unable to recover it. 00:29:59.902 [2024-07-15 11:41:28.408051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.902 [2024-07-15 11:41:28.408135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.902 [2024-07-15 11:41:28.408150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.902 [2024-07-15 11:41:28.408155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.902 [2024-07-15 11:41:28.408159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.902 [2024-07-15 11:41:28.408171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.902 qpair failed and we were unable to recover it. 00:29:59.902 [2024-07-15 11:41:28.418093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.902 [2024-07-15 11:41:28.418157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.902 [2024-07-15 11:41:28.418169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.902 [2024-07-15 11:41:28.418174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.902 [2024-07-15 11:41:28.418178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.902 [2024-07-15 11:41:28.418189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.902 qpair failed and we were unable to recover it. 
00:29:59.902 [2024-07-15 11:41:28.428117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.902 [2024-07-15 11:41:28.428184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.902 [2024-07-15 11:41:28.428195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.902 [2024-07-15 11:41:28.428200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.902 [2024-07-15 11:41:28.428205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.902 [2024-07-15 11:41:28.428216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.902 qpair failed and we were unable to recover it. 00:29:59.902 [2024-07-15 11:41:28.438130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.902 [2024-07-15 11:41:28.438204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.902 [2024-07-15 11:41:28.438216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.902 [2024-07-15 11:41:28.438221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.902 [2024-07-15 11:41:28.438225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.902 [2024-07-15 11:41:28.438236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.902 qpair failed and we were unable to recover it. 00:29:59.902 [2024-07-15 11:41:28.448288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.902 [2024-07-15 11:41:28.448353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.902 [2024-07-15 11:41:28.448365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.902 [2024-07-15 11:41:28.448370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.902 [2024-07-15 11:41:28.448374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.902 [2024-07-15 11:41:28.448388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.902 qpair failed and we were unable to recover it. 
00:29:59.902 [2024-07-15 11:41:28.458227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.902 [2024-07-15 11:41:28.458285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.902 [2024-07-15 11:41:28.458297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.902 [2024-07-15 11:41:28.458302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.902 [2024-07-15 11:41:28.458306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.902 [2024-07-15 11:41:28.458317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.902 qpair failed and we were unable to recover it. 00:29:59.902 [2024-07-15 11:41:28.468199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.902 [2024-07-15 11:41:28.468294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.902 [2024-07-15 11:41:28.468306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.902 [2024-07-15 11:41:28.468310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.902 [2024-07-15 11:41:28.468315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.902 [2024-07-15 11:41:28.468326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.902 qpair failed and we were unable to recover it. 00:29:59.902 [2024-07-15 11:41:28.478253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.902 [2024-07-15 11:41:28.478320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.903 [2024-07-15 11:41:28.478332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.903 [2024-07-15 11:41:28.478337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.903 [2024-07-15 11:41:28.478341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.903 [2024-07-15 11:41:28.478351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.903 qpair failed and we were unable to recover it. 
00:29:59.903 [2024-07-15 11:41:28.488264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.903 [2024-07-15 11:41:28.488328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.903 [2024-07-15 11:41:28.488339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.903 [2024-07-15 11:41:28.488344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.903 [2024-07-15 11:41:28.488348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.903 [2024-07-15 11:41:28.488359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.903 qpair failed and we were unable to recover it. 00:29:59.903 [2024-07-15 11:41:28.498306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.903 [2024-07-15 11:41:28.498367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.903 [2024-07-15 11:41:28.498381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.903 [2024-07-15 11:41:28.498386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.903 [2024-07-15 11:41:28.498390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.903 [2024-07-15 11:41:28.498400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.903 qpair failed and we were unable to recover it. 00:29:59.903 [2024-07-15 11:41:28.508367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.903 [2024-07-15 11:41:28.508435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.903 [2024-07-15 11:41:28.508447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.903 [2024-07-15 11:41:28.508452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.903 [2024-07-15 11:41:28.508456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.903 [2024-07-15 11:41:28.508466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.903 qpair failed and we were unable to recover it. 
00:29:59.903 [2024-07-15 11:41:28.518352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.903 [2024-07-15 11:41:28.518418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.903 [2024-07-15 11:41:28.518429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.903 [2024-07-15 11:41:28.518434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.903 [2024-07-15 11:41:28.518439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.903 [2024-07-15 11:41:28.518450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.903 qpair failed and we were unable to recover it. 00:29:59.903 [2024-07-15 11:41:28.528379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.903 [2024-07-15 11:41:28.528468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.903 [2024-07-15 11:41:28.528480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.903 [2024-07-15 11:41:28.528484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.903 [2024-07-15 11:41:28.528489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.903 [2024-07-15 11:41:28.528499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.903 qpair failed and we were unable to recover it. 00:29:59.903 [2024-07-15 11:41:28.538410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.903 [2024-07-15 11:41:28.538473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.903 [2024-07-15 11:41:28.538485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.903 [2024-07-15 11:41:28.538490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.903 [2024-07-15 11:41:28.538494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.903 [2024-07-15 11:41:28.538507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.903 qpair failed and we were unable to recover it. 
00:29:59.903 [2024-07-15 11:41:28.548456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.903 [2024-07-15 11:41:28.548520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.903 [2024-07-15 11:41:28.548532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.903 [2024-07-15 11:41:28.548537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.903 [2024-07-15 11:41:28.548541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.903 [2024-07-15 11:41:28.548552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.903 qpair failed and we were unable to recover it. 00:29:59.903 [2024-07-15 11:41:28.558467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.903 [2024-07-15 11:41:28.558537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.903 [2024-07-15 11:41:28.558548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.903 [2024-07-15 11:41:28.558553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.903 [2024-07-15 11:41:28.558557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.903 [2024-07-15 11:41:28.558568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.903 qpair failed and we were unable to recover it. 00:29:59.903 [2024-07-15 11:41:28.568476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.903 [2024-07-15 11:41:28.568576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.903 [2024-07-15 11:41:28.568588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.903 [2024-07-15 11:41:28.568593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.903 [2024-07-15 11:41:28.568597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.903 [2024-07-15 11:41:28.568608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.903 qpair failed and we were unable to recover it. 
00:29:59.903 [2024-07-15 11:41:28.578510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.903 [2024-07-15 11:41:28.578578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.903 [2024-07-15 11:41:28.578589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.903 [2024-07-15 11:41:28.578594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.903 [2024-07-15 11:41:28.578598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.903 [2024-07-15 11:41:28.578609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.903 qpair failed and we were unable to recover it. 00:29:59.903 [2024-07-15 11:41:28.588619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.903 [2024-07-15 11:41:28.588720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.903 [2024-07-15 11:41:28.588735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.903 [2024-07-15 11:41:28.588740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.903 [2024-07-15 11:41:28.588744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.903 [2024-07-15 11:41:28.588755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.903 qpair failed and we were unable to recover it. 00:29:59.903 [2024-07-15 11:41:28.598584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.903 [2024-07-15 11:41:28.598648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.903 [2024-07-15 11:41:28.598660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.903 [2024-07-15 11:41:28.598665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.903 [2024-07-15 11:41:28.598669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:29:59.903 [2024-07-15 11:41:28.598679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.903 qpair failed and we were unable to recover it. 
00:30:00.165 [2024-07-15 11:41:28.608694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.165 [2024-07-15 11:41:28.608760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.165 [2024-07-15 11:41:28.608771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.165 [2024-07-15 11:41:28.608776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.165 [2024-07-15 11:41:28.608781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.165 [2024-07-15 11:41:28.608791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.165 qpair failed and we were unable to recover it. 00:30:00.165 [2024-07-15 11:41:28.618636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.165 [2024-07-15 11:41:28.618696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.165 [2024-07-15 11:41:28.618707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.165 [2024-07-15 11:41:28.618712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.165 [2024-07-15 11:41:28.618716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.165 [2024-07-15 11:41:28.618727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.165 qpair failed and we were unable to recover it. 00:30:00.165 [2024-07-15 11:41:28.628676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.165 [2024-07-15 11:41:28.628776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.165 [2024-07-15 11:41:28.628795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.165 [2024-07-15 11:41:28.628801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.165 [2024-07-15 11:41:28.628813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.165 [2024-07-15 11:41:28.628829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.165 qpair failed and we were unable to recover it. 
00:30:00.165 [2024-07-15 11:41:28.638571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.165 [2024-07-15 11:41:28.638650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.165 [2024-07-15 11:41:28.638669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.165 [2024-07-15 11:41:28.638675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.165 [2024-07-15 11:41:28.638679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.165 [2024-07-15 11:41:28.638693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.165 qpair failed and we were unable to recover it. 00:30:00.165 [2024-07-15 11:41:28.648698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.165 [2024-07-15 11:41:28.648762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.165 [2024-07-15 11:41:28.648775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.165 [2024-07-15 11:41:28.648780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.165 [2024-07-15 11:41:28.648785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.165 [2024-07-15 11:41:28.648796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.165 qpair failed and we were unable to recover it. 00:30:00.165 [2024-07-15 11:41:28.658736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.165 [2024-07-15 11:41:28.658809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.165 [2024-07-15 11:41:28.658828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.165 [2024-07-15 11:41:28.658834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.165 [2024-07-15 11:41:28.658838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.165 [2024-07-15 11:41:28.658852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.165 qpair failed and we were unable to recover it. 
00:30:00.165 [2024-07-15 11:41:28.668687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.165 [2024-07-15 11:41:28.668764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.165 [2024-07-15 11:41:28.668777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.165 [2024-07-15 11:41:28.668782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.165 [2024-07-15 11:41:28.668786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.165 [2024-07-15 11:41:28.668797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.165 qpair failed and we were unable to recover it. 00:30:00.165 [2024-07-15 11:41:28.678791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.165 [2024-07-15 11:41:28.678859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.165 [2024-07-15 11:41:28.678871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.165 [2024-07-15 11:41:28.678876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.165 [2024-07-15 11:41:28.678881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.165 [2024-07-15 11:41:28.678892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.165 qpair failed and we were unable to recover it. 00:30:00.165 [2024-07-15 11:41:28.688725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.165 [2024-07-15 11:41:28.688784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.165 [2024-07-15 11:41:28.688797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.165 [2024-07-15 11:41:28.688802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.165 [2024-07-15 11:41:28.688806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.165 [2024-07-15 11:41:28.688817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.165 qpair failed and we were unable to recover it. 
00:30:00.165 [2024-07-15 11:41:28.698834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.165 [2024-07-15 11:41:28.698904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.165 [2024-07-15 11:41:28.698922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.165 [2024-07-15 11:41:28.698928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.165 [2024-07-15 11:41:28.698932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.165 [2024-07-15 11:41:28.698946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.165 qpair failed and we were unable to recover it. 00:30:00.165 [2024-07-15 11:41:28.708885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.165 [2024-07-15 11:41:28.708955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.165 [2024-07-15 11:41:28.708974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.165 [2024-07-15 11:41:28.708980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.165 [2024-07-15 11:41:28.708984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.165 [2024-07-15 11:41:28.708998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.165 qpair failed and we were unable to recover it. 00:30:00.165 [2024-07-15 11:41:28.718921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.165 [2024-07-15 11:41:28.718988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.165 [2024-07-15 11:41:28.719001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.165 [2024-07-15 11:41:28.719009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.165 [2024-07-15 11:41:28.719014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.165 [2024-07-15 11:41:28.719025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.165 qpair failed and we were unable to recover it. 
00:30:00.165 [2024-07-15 11:41:28.728828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.165 [2024-07-15 11:41:28.728934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.165 [2024-07-15 11:41:28.728946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.165 [2024-07-15 11:41:28.728951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.165 [2024-07-15 11:41:28.728955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.165 [2024-07-15 11:41:28.728967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.165 qpair failed and we were unable to recover it. 00:30:00.165 [2024-07-15 11:41:28.738975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.165 [2024-07-15 11:41:28.739039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.165 [2024-07-15 11:41:28.739051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.165 [2024-07-15 11:41:28.739056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.165 [2024-07-15 11:41:28.739060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.165 [2024-07-15 11:41:28.739071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.165 qpair failed and we were unable to recover it. 00:30:00.165 [2024-07-15 11:41:28.748984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.165 [2024-07-15 11:41:28.749047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.165 [2024-07-15 11:41:28.749058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.165 [2024-07-15 11:41:28.749063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.165 [2024-07-15 11:41:28.749067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.165 [2024-07-15 11:41:28.749078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.165 qpair failed and we were unable to recover it. 
00:30:00.165 [2024-07-15 11:41:28.759016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.165 [2024-07-15 11:41:28.759084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.165 [2024-07-15 11:41:28.759095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.165 [2024-07-15 11:41:28.759100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.165 [2024-07-15 11:41:28.759104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.165 [2024-07-15 11:41:28.759115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.165 qpair failed and we were unable to recover it. 00:30:00.165 [2024-07-15 11:41:28.769086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.165 [2024-07-15 11:41:28.769197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.165 [2024-07-15 11:41:28.769209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.165 [2024-07-15 11:41:28.769214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.165 [2024-07-15 11:41:28.769219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.165 [2024-07-15 11:41:28.769230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.165 qpair failed and we were unable to recover it. 00:30:00.165 [2024-07-15 11:41:28.779084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.166 [2024-07-15 11:41:28.779147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.166 [2024-07-15 11:41:28.779159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.166 [2024-07-15 11:41:28.779164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.166 [2024-07-15 11:41:28.779168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.166 [2024-07-15 11:41:28.779179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.166 qpair failed and we were unable to recover it. 
00:30:00.166 [2024-07-15 11:41:28.789104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.166 [2024-07-15 11:41:28.789175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.166 [2024-07-15 11:41:28.789187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.166 [2024-07-15 11:41:28.789192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.166 [2024-07-15 11:41:28.789196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.166 [2024-07-15 11:41:28.789207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.166 qpair failed and we were unable to recover it. 00:30:00.166 [2024-07-15 11:41:28.799129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.166 [2024-07-15 11:41:28.799206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.166 [2024-07-15 11:41:28.799217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.166 [2024-07-15 11:41:28.799222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.166 [2024-07-15 11:41:28.799226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.166 [2024-07-15 11:41:28.799237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.166 qpair failed and we were unable to recover it. 00:30:00.166 [2024-07-15 11:41:28.809164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.166 [2024-07-15 11:41:28.809228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.166 [2024-07-15 11:41:28.809240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.166 [2024-07-15 11:41:28.809248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.166 [2024-07-15 11:41:28.809252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.166 [2024-07-15 11:41:28.809262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.166 qpair failed and we were unable to recover it. 
00:30:00.166 [2024-07-15 11:41:28.819184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.166 [2024-07-15 11:41:28.819249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.166 [2024-07-15 11:41:28.819261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.166 [2024-07-15 11:41:28.819266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.166 [2024-07-15 11:41:28.819270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.166 [2024-07-15 11:41:28.819281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.166 qpair failed and we were unable to recover it. 00:30:00.166 [2024-07-15 11:41:28.829229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.166 [2024-07-15 11:41:28.829314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.166 [2024-07-15 11:41:28.829326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.166 [2024-07-15 11:41:28.829331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.166 [2024-07-15 11:41:28.829335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.166 [2024-07-15 11:41:28.829345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.166 qpair failed and we were unable to recover it. 00:30:00.166 [2024-07-15 11:41:28.839130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.166 [2024-07-15 11:41:28.839199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.166 [2024-07-15 11:41:28.839210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.166 [2024-07-15 11:41:28.839215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.166 [2024-07-15 11:41:28.839220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.166 [2024-07-15 11:41:28.839230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.166 qpair failed and we were unable to recover it. 
00:30:00.166 [2024-07-15 11:41:28.849273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.166 [2024-07-15 11:41:28.849364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.166 [2024-07-15 11:41:28.849376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.166 [2024-07-15 11:41:28.849381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.166 [2024-07-15 11:41:28.849385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.166 [2024-07-15 11:41:28.849396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.166 qpair failed and we were unable to recover it. 00:30:00.166 [2024-07-15 11:41:28.859301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.166 [2024-07-15 11:41:28.859368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.166 [2024-07-15 11:41:28.859380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.166 [2024-07-15 11:41:28.859385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.166 [2024-07-15 11:41:28.859389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.166 [2024-07-15 11:41:28.859400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.166 qpair failed and we were unable to recover it. 00:30:00.428 [2024-07-15 11:41:28.869334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.428 [2024-07-15 11:41:28.869414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.428 [2024-07-15 11:41:28.869425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.428 [2024-07-15 11:41:28.869430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.428 [2024-07-15 11:41:28.869435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.428 [2024-07-15 11:41:28.869446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.428 qpair failed and we were unable to recover it. 
00:30:00.428 [2024-07-15 11:41:28.879374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.428 [2024-07-15 11:41:28.879445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.428 [2024-07-15 11:41:28.879456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.428 [2024-07-15 11:41:28.879461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.428 [2024-07-15 11:41:28.879465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.428 [2024-07-15 11:41:28.879476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.428 qpair failed and we were unable to recover it. 00:30:00.428 [2024-07-15 11:41:28.889394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.428 [2024-07-15 11:41:28.889454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.428 [2024-07-15 11:41:28.889466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.428 [2024-07-15 11:41:28.889471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.428 [2024-07-15 11:41:28.889475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.428 [2024-07-15 11:41:28.889486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.428 qpair failed and we were unable to recover it. 00:30:00.428 [2024-07-15 11:41:28.899417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.428 [2024-07-15 11:41:28.899482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.428 [2024-07-15 11:41:28.899497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.428 [2024-07-15 11:41:28.899502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.428 [2024-07-15 11:41:28.899506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.428 [2024-07-15 11:41:28.899517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.428 qpair failed and we were unable to recover it. 
00:30:00.428 [2024-07-15 11:41:28.909485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.428 [2024-07-15 11:41:28.909554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.428 [2024-07-15 11:41:28.909566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.428 [2024-07-15 11:41:28.909570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.428 [2024-07-15 11:41:28.909575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.428 [2024-07-15 11:41:28.909585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.428 qpair failed and we were unable to recover it. 00:30:00.428 [2024-07-15 11:41:28.919461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.428 [2024-07-15 11:41:28.919528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.428 [2024-07-15 11:41:28.919540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.428 [2024-07-15 11:41:28.919545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.428 [2024-07-15 11:41:28.919549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.428 [2024-07-15 11:41:28.919559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.428 qpair failed and we were unable to recover it. 00:30:00.428 [2024-07-15 11:41:28.929517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.428 [2024-07-15 11:41:28.929578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.428 [2024-07-15 11:41:28.929589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.428 [2024-07-15 11:41:28.929594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.428 [2024-07-15 11:41:28.929598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.428 [2024-07-15 11:41:28.929609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.428 qpair failed and we were unable to recover it. 
00:30:00.428 [2024-07-15 11:41:28.939524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.428 [2024-07-15 11:41:28.939585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.428 [2024-07-15 11:41:28.939597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.428 [2024-07-15 11:41:28.939601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.428 [2024-07-15 11:41:28.939606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.428 [2024-07-15 11:41:28.939619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.428 qpair failed and we were unable to recover it. 00:30:00.428 [2024-07-15 11:41:28.949672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.428 [2024-07-15 11:41:28.949738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.428 [2024-07-15 11:41:28.949749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.428 [2024-07-15 11:41:28.949754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.428 [2024-07-15 11:41:28.949759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.428 [2024-07-15 11:41:28.949769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.428 qpair failed and we were unable to recover it. 00:30:00.428 [2024-07-15 11:41:28.959656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.428 [2024-07-15 11:41:28.959758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.428 [2024-07-15 11:41:28.959770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.428 [2024-07-15 11:41:28.959775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.428 [2024-07-15 11:41:28.959779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.428 [2024-07-15 11:41:28.959790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.428 qpair failed and we were unable to recover it. 
00:30:00.428 [2024-07-15 11:41:28.969643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.428 [2024-07-15 11:41:28.969722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.428 [2024-07-15 11:41:28.969734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.428 [2024-07-15 11:41:28.969739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.428 [2024-07-15 11:41:28.969744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.428 [2024-07-15 11:41:28.969754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.428 qpair failed and we were unable to recover it. 00:30:00.428 [2024-07-15 11:41:28.979567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.428 [2024-07-15 11:41:28.979658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.428 [2024-07-15 11:41:28.979670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.428 [2024-07-15 11:41:28.979675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.428 [2024-07-15 11:41:28.979679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.428 [2024-07-15 11:41:28.979689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.428 qpair failed and we were unable to recover it. 00:30:00.428 [2024-07-15 11:41:28.989692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.428 [2024-07-15 11:41:28.989804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.428 [2024-07-15 11:41:28.989818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.428 [2024-07-15 11:41:28.989823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.428 [2024-07-15 11:41:28.989827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.428 [2024-07-15 11:41:28.989837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.428 qpair failed and we were unable to recover it. 
00:30:00.428 [2024-07-15 11:41:28.999708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.428 [2024-07-15 11:41:28.999780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.428 [2024-07-15 11:41:28.999798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.428 [2024-07-15 11:41:28.999805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.428 [2024-07-15 11:41:28.999809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.428 [2024-07-15 11:41:28.999823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.428 qpair failed and we were unable to recover it. 00:30:00.428 [2024-07-15 11:41:29.009735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.428 [2024-07-15 11:41:29.009811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.428 [2024-07-15 11:41:29.009830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.428 [2024-07-15 11:41:29.009836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.428 [2024-07-15 11:41:29.009841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.428 [2024-07-15 11:41:29.009855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.428 qpair failed and we were unable to recover it. 00:30:00.428 [2024-07-15 11:41:29.019754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.428 [2024-07-15 11:41:29.019820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.428 [2024-07-15 11:41:29.019838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.428 [2024-07-15 11:41:29.019844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.428 [2024-07-15 11:41:29.019848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.428 [2024-07-15 11:41:29.019862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.428 qpair failed and we were unable to recover it. 
00:30:00.428 [2024-07-15 11:41:29.029677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.428 [2024-07-15 11:41:29.029748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.428 [2024-07-15 11:41:29.029760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.428 [2024-07-15 11:41:29.029765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.428 [2024-07-15 11:41:29.029773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.428 [2024-07-15 11:41:29.029784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.428 qpair failed and we were unable to recover it. 00:30:00.428 [2024-07-15 11:41:29.039815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.428 [2024-07-15 11:41:29.039885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.428 [2024-07-15 11:41:29.039897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.428 [2024-07-15 11:41:29.039902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.428 [2024-07-15 11:41:29.039906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.428 [2024-07-15 11:41:29.039917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.428 qpair failed and we were unable to recover it. 00:30:00.428 [2024-07-15 11:41:29.049857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.428 [2024-07-15 11:41:29.049919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.428 [2024-07-15 11:41:29.049930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.428 [2024-07-15 11:41:29.049935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.428 [2024-07-15 11:41:29.049939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.428 [2024-07-15 11:41:29.049950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.429 qpair failed and we were unable to recover it. 
00:30:00.429 [2024-07-15 11:41:29.059868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.429 [2024-07-15 11:41:29.059929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.429 [2024-07-15 11:41:29.059941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.429 [2024-07-15 11:41:29.059946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.429 [2024-07-15 11:41:29.059950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.429 [2024-07-15 11:41:29.059961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.429 qpair failed and we were unable to recover it. 00:30:00.429 [2024-07-15 11:41:29.069888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.429 [2024-07-15 11:41:29.069952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.429 [2024-07-15 11:41:29.069964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.429 [2024-07-15 11:41:29.069969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.429 [2024-07-15 11:41:29.069973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.429 [2024-07-15 11:41:29.069984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.429 qpair failed and we were unable to recover it. 00:30:00.429 [2024-07-15 11:41:29.079921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.429 [2024-07-15 11:41:29.079995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.429 [2024-07-15 11:41:29.080008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.429 [2024-07-15 11:41:29.080012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.429 [2024-07-15 11:41:29.080017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.429 [2024-07-15 11:41:29.080027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.429 qpair failed and we were unable to recover it. 
00:30:00.429 [2024-07-15 11:41:29.089931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.429 [2024-07-15 11:41:29.089992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.429 [2024-07-15 11:41:29.090004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.429 [2024-07-15 11:41:29.090009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.429 [2024-07-15 11:41:29.090013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.429 [2024-07-15 11:41:29.090023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.429 qpair failed and we were unable to recover it. 00:30:00.429 [2024-07-15 11:41:29.099945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.429 [2024-07-15 11:41:29.100008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.429 [2024-07-15 11:41:29.100020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.429 [2024-07-15 11:41:29.100025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.429 [2024-07-15 11:41:29.100029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.429 [2024-07-15 11:41:29.100039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.429 qpair failed and we were unable to recover it. 00:30:00.429 [2024-07-15 11:41:29.110089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.429 [2024-07-15 11:41:29.110188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.429 [2024-07-15 11:41:29.110200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.429 [2024-07-15 11:41:29.110205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.429 [2024-07-15 11:41:29.110209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.429 [2024-07-15 11:41:29.110220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.429 qpair failed and we were unable to recover it. 
00:30:00.429 [2024-07-15 11:41:29.120093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.429 [2024-07-15 11:41:29.120208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.429 [2024-07-15 11:41:29.120220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.429 [2024-07-15 11:41:29.120228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.429 [2024-07-15 11:41:29.120232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.429 [2024-07-15 11:41:29.120243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.429 qpair failed and we were unable to recover it. 00:30:00.691 [2024-07-15 11:41:29.130054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.691 [2024-07-15 11:41:29.130124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.691 [2024-07-15 11:41:29.130137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.691 [2024-07-15 11:41:29.130142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.691 [2024-07-15 11:41:29.130146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.691 [2024-07-15 11:41:29.130157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.691 qpair failed and we were unable to recover it. 00:30:00.691 [2024-07-15 11:41:29.140072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.691 [2024-07-15 11:41:29.140142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.691 [2024-07-15 11:41:29.140154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.691 [2024-07-15 11:41:29.140159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.691 [2024-07-15 11:41:29.140163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.691 [2024-07-15 11:41:29.140174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.691 qpair failed and we were unable to recover it. 
00:30:00.691 [2024-07-15 11:41:29.150030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.691 [2024-07-15 11:41:29.150151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.691 [2024-07-15 11:41:29.150163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.691 [2024-07-15 11:41:29.150168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.691 [2024-07-15 11:41:29.150172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.691 [2024-07-15 11:41:29.150183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.691 qpair failed and we were unable to recover it. 00:30:00.691 [2024-07-15 11:41:29.160195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.691 [2024-07-15 11:41:29.160307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.691 [2024-07-15 11:41:29.160319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.691 [2024-07-15 11:41:29.160324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.691 [2024-07-15 11:41:29.160328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.691 [2024-07-15 11:41:29.160339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.691 qpair failed and we were unable to recover it. 00:30:00.691 [2024-07-15 11:41:29.170047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.691 [2024-07-15 11:41:29.170107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.691 [2024-07-15 11:41:29.170119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.691 [2024-07-15 11:41:29.170127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.691 [2024-07-15 11:41:29.170132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.691 [2024-07-15 11:41:29.170142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.691 qpair failed and we were unable to recover it. 
00:30:00.691 [2024-07-15 11:41:29.180239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.691 [2024-07-15 11:41:29.180306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.691 [2024-07-15 11:41:29.180318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.691 [2024-07-15 11:41:29.180323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.691 [2024-07-15 11:41:29.180327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.691 [2024-07-15 11:41:29.180338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.692 qpair failed and we were unable to recover it. 00:30:00.692 [2024-07-15 11:41:29.190209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.692 [2024-07-15 11:41:29.190276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.692 [2024-07-15 11:41:29.190288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.692 [2024-07-15 11:41:29.190293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.692 [2024-07-15 11:41:29.190297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.692 [2024-07-15 11:41:29.190308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.692 qpair failed and we were unable to recover it. 00:30:00.692 [2024-07-15 11:41:29.200272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.692 [2024-07-15 11:41:29.200344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.692 [2024-07-15 11:41:29.200356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.692 [2024-07-15 11:41:29.200361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.692 [2024-07-15 11:41:29.200365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.692 [2024-07-15 11:41:29.200376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.692 qpair failed and we were unable to recover it. 
00:30:00.692 [2024-07-15 11:41:29.210293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.692 [2024-07-15 11:41:29.210355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.692 [2024-07-15 11:41:29.210366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.692 [2024-07-15 11:41:29.210374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.692 [2024-07-15 11:41:29.210378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.692 [2024-07-15 11:41:29.210389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.692 qpair failed and we were unable to recover it. 00:30:00.692 [2024-07-15 11:41:29.220323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.692 [2024-07-15 11:41:29.220383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.692 [2024-07-15 11:41:29.220395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.692 [2024-07-15 11:41:29.220399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.692 [2024-07-15 11:41:29.220403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.692 [2024-07-15 11:41:29.220414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.692 qpair failed and we were unable to recover it. 00:30:00.692 [2024-07-15 11:41:29.230366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.692 [2024-07-15 11:41:29.230430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.692 [2024-07-15 11:41:29.230441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.692 [2024-07-15 11:41:29.230446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.692 [2024-07-15 11:41:29.230450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.692 [2024-07-15 11:41:29.230461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.692 qpair failed and we were unable to recover it. 
00:30:00.692 [2024-07-15 11:41:29.240350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.692 [2024-07-15 11:41:29.240423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.692 [2024-07-15 11:41:29.240434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.692 [2024-07-15 11:41:29.240439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.692 [2024-07-15 11:41:29.240443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.692 [2024-07-15 11:41:29.240453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.692 qpair failed and we were unable to recover it. 00:30:00.692 [2024-07-15 11:41:29.250525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.692 [2024-07-15 11:41:29.250589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.692 [2024-07-15 11:41:29.250601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.692 [2024-07-15 11:41:29.250606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.692 [2024-07-15 11:41:29.250610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.692 [2024-07-15 11:41:29.250621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.692 qpair failed and we were unable to recover it. 00:30:00.692 [2024-07-15 11:41:29.260430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.692 [2024-07-15 11:41:29.260529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.692 [2024-07-15 11:41:29.260541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.692 [2024-07-15 11:41:29.260545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.692 [2024-07-15 11:41:29.260550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.692 [2024-07-15 11:41:29.260560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.692 qpair failed and we were unable to recover it. 
00:30:00.692 [2024-07-15 11:41:29.270449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.692 [2024-07-15 11:41:29.270516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.692 [2024-07-15 11:41:29.270528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.692 [2024-07-15 11:41:29.270533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.692 [2024-07-15 11:41:29.270537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.692 [2024-07-15 11:41:29.270547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.692 qpair failed and we were unable to recover it. 00:30:00.692 [2024-07-15 11:41:29.280461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.692 [2024-07-15 11:41:29.280533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.692 [2024-07-15 11:41:29.280544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.692 [2024-07-15 11:41:29.280549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.692 [2024-07-15 11:41:29.280553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.692 [2024-07-15 11:41:29.280564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.692 qpair failed and we were unable to recover it. 00:30:00.692 [2024-07-15 11:41:29.290500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.692 [2024-07-15 11:41:29.290594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.692 [2024-07-15 11:41:29.290606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.692 [2024-07-15 11:41:29.290611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.692 [2024-07-15 11:41:29.290615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.692 [2024-07-15 11:41:29.290625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.692 qpair failed and we were unable to recover it. 
00:30:00.693 [2024-07-15 11:41:29.300451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.693 [2024-07-15 11:41:29.300517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.693 [2024-07-15 11:41:29.300532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.693 [2024-07-15 11:41:29.300537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.693 [2024-07-15 11:41:29.300541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.693 [2024-07-15 11:41:29.300552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.693 qpair failed and we were unable to recover it. 00:30:00.693 [2024-07-15 11:41:29.310441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.693 [2024-07-15 11:41:29.310505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.693 [2024-07-15 11:41:29.310517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.693 [2024-07-15 11:41:29.310522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.693 [2024-07-15 11:41:29.310526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.693 [2024-07-15 11:41:29.310536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.693 qpair failed and we were unable to recover it. 00:30:00.693 [2024-07-15 11:41:29.320581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.693 [2024-07-15 11:41:29.320649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.693 [2024-07-15 11:41:29.320660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.693 [2024-07-15 11:41:29.320665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.693 [2024-07-15 11:41:29.320669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.693 [2024-07-15 11:41:29.320680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.693 qpair failed and we were unable to recover it. 
00:30:00.693 [2024-07-15 11:41:29.330602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.693 [2024-07-15 11:41:29.330668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.693 [2024-07-15 11:41:29.330680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.693 [2024-07-15 11:41:29.330684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.693 [2024-07-15 11:41:29.330689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.693 [2024-07-15 11:41:29.330699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.693 qpair failed and we were unable to recover it. 00:30:00.693 [2024-07-15 11:41:29.340665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.693 [2024-07-15 11:41:29.340726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.693 [2024-07-15 11:41:29.340741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.693 [2024-07-15 11:41:29.340746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.693 [2024-07-15 11:41:29.340750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.693 [2024-07-15 11:41:29.340765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.693 qpair failed and we were unable to recover it. 00:30:00.693 [2024-07-15 11:41:29.350673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.693 [2024-07-15 11:41:29.350740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.693 [2024-07-15 11:41:29.350752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.693 [2024-07-15 11:41:29.350757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.693 [2024-07-15 11:41:29.350761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.693 [2024-07-15 11:41:29.350772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.693 qpair failed and we were unable to recover it. 
00:30:00.693 [2024-07-15 11:41:29.360734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.693 [2024-07-15 11:41:29.360806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.693 [2024-07-15 11:41:29.360818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.693 [2024-07-15 11:41:29.360823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.693 [2024-07-15 11:41:29.360828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.693 [2024-07-15 11:41:29.360838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.693 qpair failed and we were unable to recover it. 00:30:00.693 [2024-07-15 11:41:29.370672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.693 [2024-07-15 11:41:29.370754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.693 [2024-07-15 11:41:29.370767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.693 [2024-07-15 11:41:29.370772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.693 [2024-07-15 11:41:29.370779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.693 [2024-07-15 11:41:29.370790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.693 qpair failed and we were unable to recover it. 00:30:00.693 [2024-07-15 11:41:29.380729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.693 [2024-07-15 11:41:29.380789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.693 [2024-07-15 11:41:29.380801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.693 [2024-07-15 11:41:29.380806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.693 [2024-07-15 11:41:29.380810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.693 [2024-07-15 11:41:29.380822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.693 qpair failed and we were unable to recover it. 
00:30:00.693 [2024-07-15 11:41:29.390731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.693 [2024-07-15 11:41:29.390797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.693 [2024-07-15 11:41:29.390814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.693 [2024-07-15 11:41:29.390819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.693 [2024-07-15 11:41:29.390823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.693 [2024-07-15 11:41:29.390834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.693 qpair failed and we were unable to recover it. 00:30:00.956 [2024-07-15 11:41:29.400798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.956 [2024-07-15 11:41:29.400867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.956 [2024-07-15 11:41:29.400879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.956 [2024-07-15 11:41:29.400884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.956 [2024-07-15 11:41:29.400889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.956 [2024-07-15 11:41:29.400899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.956 qpair failed and we were unable to recover it. 00:30:00.956 [2024-07-15 11:41:29.410934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.956 [2024-07-15 11:41:29.411034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.956 [2024-07-15 11:41:29.411046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.956 [2024-07-15 11:41:29.411051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.956 [2024-07-15 11:41:29.411055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.956 [2024-07-15 11:41:29.411066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.956 qpair failed and we were unable to recover it. 
00:30:00.956 [2024-07-15 11:41:29.420848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.957 [2024-07-15 11:41:29.420953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.957 [2024-07-15 11:41:29.420965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.957 [2024-07-15 11:41:29.420969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.957 [2024-07-15 11:41:29.420974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.957 [2024-07-15 11:41:29.420984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.957 qpair failed and we were unable to recover it. 00:30:00.957 [2024-07-15 11:41:29.430844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.957 [2024-07-15 11:41:29.430910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.957 [2024-07-15 11:41:29.430922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.957 [2024-07-15 11:41:29.430927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.957 [2024-07-15 11:41:29.430934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.957 [2024-07-15 11:41:29.430945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.957 qpair failed and we were unable to recover it. 00:30:00.957 [2024-07-15 11:41:29.440898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.957 [2024-07-15 11:41:29.440965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.957 [2024-07-15 11:41:29.440976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.957 [2024-07-15 11:41:29.440981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.957 [2024-07-15 11:41:29.440986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.957 [2024-07-15 11:41:29.440996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.957 qpair failed and we were unable to recover it. 
00:30:00.957 [2024-07-15 11:41:29.450923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.957 [2024-07-15 11:41:29.450986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.957 [2024-07-15 11:41:29.450999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.957 [2024-07-15 11:41:29.451003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.957 [2024-07-15 11:41:29.451008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.957 [2024-07-15 11:41:29.451018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.957 qpair failed and we were unable to recover it. 00:30:00.957 [2024-07-15 11:41:29.460953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.957 [2024-07-15 11:41:29.461016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.957 [2024-07-15 11:41:29.461027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.957 [2024-07-15 11:41:29.461032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.957 [2024-07-15 11:41:29.461036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.957 [2024-07-15 11:41:29.461047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.957 qpair failed and we were unable to recover it. 00:30:00.957 [2024-07-15 11:41:29.471020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.957 [2024-07-15 11:41:29.471119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.957 [2024-07-15 11:41:29.471135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.957 [2024-07-15 11:41:29.471139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.957 [2024-07-15 11:41:29.471143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.957 [2024-07-15 11:41:29.471155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.957 qpair failed and we were unable to recover it. 
00:30:00.957 [2024-07-15 11:41:29.481002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.957 [2024-07-15 11:41:29.481075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.957 [2024-07-15 11:41:29.481088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.957 [2024-07-15 11:41:29.481093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.957 [2024-07-15 11:41:29.481097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.957 [2024-07-15 11:41:29.481108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.957 qpair failed and we were unable to recover it. 00:30:00.957 [2024-07-15 11:41:29.491065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.957 [2024-07-15 11:41:29.491157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.957 [2024-07-15 11:41:29.491169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.957 [2024-07-15 11:41:29.491174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.957 [2024-07-15 11:41:29.491178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.957 [2024-07-15 11:41:29.491188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.957 qpair failed and we were unable to recover it. 00:30:00.957 [2024-07-15 11:41:29.501054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.957 [2024-07-15 11:41:29.501117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.957 [2024-07-15 11:41:29.501132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.957 [2024-07-15 11:41:29.501137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.957 [2024-07-15 11:41:29.501141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.957 [2024-07-15 11:41:29.501152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.957 qpair failed and we were unable to recover it. 
00:30:00.957 [2024-07-15 11:41:29.511107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.957 [2024-07-15 11:41:29.511178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.957 [2024-07-15 11:41:29.511190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.957 [2024-07-15 11:41:29.511194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.957 [2024-07-15 11:41:29.511199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.957 [2024-07-15 11:41:29.511209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.957 qpair failed and we were unable to recover it. 00:30:00.957 [2024-07-15 11:41:29.521012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.957 [2024-07-15 11:41:29.521079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.957 [2024-07-15 11:41:29.521090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.957 [2024-07-15 11:41:29.521095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.957 [2024-07-15 11:41:29.521102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.957 [2024-07-15 11:41:29.521113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.957 qpair failed and we were unable to recover it. 00:30:00.957 [2024-07-15 11:41:29.531164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.957 [2024-07-15 11:41:29.531228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.957 [2024-07-15 11:41:29.531240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.957 [2024-07-15 11:41:29.531245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.957 [2024-07-15 11:41:29.531249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.957 [2024-07-15 11:41:29.531259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.957 qpair failed and we were unable to recover it. 
00:30:00.957 [2024-07-15 11:41:29.541177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.957 [2024-07-15 11:41:29.541243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.957 [2024-07-15 11:41:29.541255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.957 [2024-07-15 11:41:29.541260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.957 [2024-07-15 11:41:29.541264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.957 [2024-07-15 11:41:29.541274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.957 qpair failed and we were unable to recover it. 00:30:00.957 [2024-07-15 11:41:29.551229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.957 [2024-07-15 11:41:29.551296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.957 [2024-07-15 11:41:29.551308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.957 [2024-07-15 11:41:29.551312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.957 [2024-07-15 11:41:29.551316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.957 [2024-07-15 11:41:29.551327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.957 qpair failed and we were unable to recover it. 00:30:00.957 [2024-07-15 11:41:29.561233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.957 [2024-07-15 11:41:29.561298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.958 [2024-07-15 11:41:29.561310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.958 [2024-07-15 11:41:29.561314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.958 [2024-07-15 11:41:29.561318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.958 [2024-07-15 11:41:29.561329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.958 qpair failed and we were unable to recover it. 
00:30:00.958 [2024-07-15 11:41:29.571280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.958 [2024-07-15 11:41:29.571347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.958 [2024-07-15 11:41:29.571358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.958 [2024-07-15 11:41:29.571363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.958 [2024-07-15 11:41:29.571367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.958 [2024-07-15 11:41:29.571378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.958 qpair failed and we were unable to recover it. 00:30:00.958 [2024-07-15 11:41:29.581292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.958 [2024-07-15 11:41:29.581354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.958 [2024-07-15 11:41:29.581365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.958 [2024-07-15 11:41:29.581370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.958 [2024-07-15 11:41:29.581374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.958 [2024-07-15 11:41:29.581385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.958 qpair failed and we were unable to recover it. 00:30:00.958 [2024-07-15 11:41:29.591363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.958 [2024-07-15 11:41:29.591428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.958 [2024-07-15 11:41:29.591440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.958 [2024-07-15 11:41:29.591445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.958 [2024-07-15 11:41:29.591449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.958 [2024-07-15 11:41:29.591460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.958 qpair failed and we were unable to recover it. 
00:30:00.958 [2024-07-15 11:41:29.601342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.958 [2024-07-15 11:41:29.601414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.958 [2024-07-15 11:41:29.601426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.958 [2024-07-15 11:41:29.601431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.958 [2024-07-15 11:41:29.601435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.958 [2024-07-15 11:41:29.601446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.958 qpair failed and we were unable to recover it. 00:30:00.958 [2024-07-15 11:41:29.611251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.958 [2024-07-15 11:41:29.611311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.958 [2024-07-15 11:41:29.611323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.958 [2024-07-15 11:41:29.611330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.958 [2024-07-15 11:41:29.611334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.958 [2024-07-15 11:41:29.611345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.958 qpair failed and we were unable to recover it. 00:30:00.958 [2024-07-15 11:41:29.621396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.958 [2024-07-15 11:41:29.621456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.958 [2024-07-15 11:41:29.621468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.958 [2024-07-15 11:41:29.621472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.958 [2024-07-15 11:41:29.621477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.958 [2024-07-15 11:41:29.621487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.958 qpair failed and we were unable to recover it. 
00:30:00.958 [2024-07-15 11:41:29.631420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.958 [2024-07-15 11:41:29.631483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.958 [2024-07-15 11:41:29.631494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.958 [2024-07-15 11:41:29.631499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.958 [2024-07-15 11:41:29.631503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.958 [2024-07-15 11:41:29.631514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.958 qpair failed and we were unable to recover it. 00:30:00.958 [2024-07-15 11:41:29.641424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.958 [2024-07-15 11:41:29.641497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.958 [2024-07-15 11:41:29.641508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.958 [2024-07-15 11:41:29.641513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.958 [2024-07-15 11:41:29.641518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.958 [2024-07-15 11:41:29.641528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.958 qpair failed and we were unable to recover it. 00:30:00.958 [2024-07-15 11:41:29.651462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.958 [2024-07-15 11:41:29.651530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.958 [2024-07-15 11:41:29.651542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.958 [2024-07-15 11:41:29.651547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.958 [2024-07-15 11:41:29.651551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:00.958 [2024-07-15 11:41:29.651562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.958 qpair failed and we were unable to recover it. 
00:30:01.219 [2024-07-15 11:41:29.661490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.219 [2024-07-15 11:41:29.661555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.220 [2024-07-15 11:41:29.661567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.220 [2024-07-15 11:41:29.661571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.220 [2024-07-15 11:41:29.661576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.220 [2024-07-15 11:41:29.661586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.220 qpair failed and we were unable to recover it. 00:30:01.220 [2024-07-15 11:41:29.671536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.220 [2024-07-15 11:41:29.671605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.220 [2024-07-15 11:41:29.671616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.220 [2024-07-15 11:41:29.671621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.220 [2024-07-15 11:41:29.671626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.220 [2024-07-15 11:41:29.671636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.220 qpair failed and we were unable to recover it. 00:30:01.220 [2024-07-15 11:41:29.681512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.220 [2024-07-15 11:41:29.681588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.220 [2024-07-15 11:41:29.681599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.220 [2024-07-15 11:41:29.681604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.220 [2024-07-15 11:41:29.681609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.220 [2024-07-15 11:41:29.681619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.220 qpair failed and we were unable to recover it. 
00:30:01.220 [2024-07-15 11:41:29.691584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.220 [2024-07-15 11:41:29.691645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.220 [2024-07-15 11:41:29.691657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.220 [2024-07-15 11:41:29.691662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.220 [2024-07-15 11:41:29.691666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.220 [2024-07-15 11:41:29.691676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.220 qpair failed and we were unable to recover it. 00:30:01.220 [2024-07-15 11:41:29.701624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.220 [2024-07-15 11:41:29.701687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.220 [2024-07-15 11:41:29.701702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.220 [2024-07-15 11:41:29.701707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.220 [2024-07-15 11:41:29.701711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.220 [2024-07-15 11:41:29.701722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.220 qpair failed and we were unable to recover it. 00:30:01.220 [2024-07-15 11:41:29.711566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.220 [2024-07-15 11:41:29.711632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.220 [2024-07-15 11:41:29.711644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.220 [2024-07-15 11:41:29.711648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.220 [2024-07-15 11:41:29.711652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.220 [2024-07-15 11:41:29.711663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.220 qpair failed and we were unable to recover it. 
00:30:01.220 [2024-07-15 11:41:29.721681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.220 [2024-07-15 11:41:29.721752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.220 [2024-07-15 11:41:29.721764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.220 [2024-07-15 11:41:29.721769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.220 [2024-07-15 11:41:29.721773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.220 [2024-07-15 11:41:29.721784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.220 qpair failed and we were unable to recover it. 00:30:01.220 [2024-07-15 11:41:29.731698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.220 [2024-07-15 11:41:29.731761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.220 [2024-07-15 11:41:29.731780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.220 [2024-07-15 11:41:29.731786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.220 [2024-07-15 11:41:29.731790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.220 [2024-07-15 11:41:29.731804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.220 qpair failed and we were unable to recover it. 00:30:01.220 [2024-07-15 11:41:29.741695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.220 [2024-07-15 11:41:29.741762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.220 [2024-07-15 11:41:29.741780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.220 [2024-07-15 11:41:29.741786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.220 [2024-07-15 11:41:29.741791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.220 [2024-07-15 11:41:29.741808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.220 qpair failed and we were unable to recover it. 
00:30:01.220 [2024-07-15 11:41:29.751815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.220 [2024-07-15 11:41:29.751909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.220 [2024-07-15 11:41:29.751922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.220 [2024-07-15 11:41:29.751927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.220 [2024-07-15 11:41:29.751931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.220 [2024-07-15 11:41:29.751942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.220 qpair failed and we were unable to recover it. 00:30:01.220 [2024-07-15 11:41:29.761809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.220 [2024-07-15 11:41:29.761883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.220 [2024-07-15 11:41:29.761901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.220 [2024-07-15 11:41:29.761907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.220 [2024-07-15 11:41:29.761911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.220 [2024-07-15 11:41:29.761925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.220 qpair failed and we were unable to recover it. 00:30:01.220 [2024-07-15 11:41:29.771859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.220 [2024-07-15 11:41:29.771929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.220 [2024-07-15 11:41:29.771948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.220 [2024-07-15 11:41:29.771954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.220 [2024-07-15 11:41:29.771958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.220 [2024-07-15 11:41:29.771972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.220 qpair failed and we were unable to recover it. 
00:30:01.220 [2024-07-15 11:41:29.781877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.220 [2024-07-15 11:41:29.781940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.220 [2024-07-15 11:41:29.781953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.220 [2024-07-15 11:41:29.781958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.220 [2024-07-15 11:41:29.781962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.220 [2024-07-15 11:41:29.781974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.220 qpair failed and we were unable to recover it. 00:30:01.220 [2024-07-15 11:41:29.791875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.220 [2024-07-15 11:41:29.791936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.220 [2024-07-15 11:41:29.791951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.220 [2024-07-15 11:41:29.791957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.220 [2024-07-15 11:41:29.791961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.220 [2024-07-15 11:41:29.791972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.220 qpair failed and we were unable to recover it. 00:30:01.220 [2024-07-15 11:41:29.801935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.220 [2024-07-15 11:41:29.802002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.220 [2024-07-15 11:41:29.802014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.221 [2024-07-15 11:41:29.802019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.221 [2024-07-15 11:41:29.802023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.221 [2024-07-15 11:41:29.802034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.221 qpair failed and we were unable to recover it. 
00:30:01.221 [2024-07-15 11:41:29.811921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.221 [2024-07-15 11:41:29.811988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.221 [2024-07-15 11:41:29.812000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.221 [2024-07-15 11:41:29.812005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.221 [2024-07-15 11:41:29.812009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.221 [2024-07-15 11:41:29.812020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.221 qpair failed and we were unable to recover it. 00:30:01.221 [2024-07-15 11:41:29.821805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.221 [2024-07-15 11:41:29.821865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.221 [2024-07-15 11:41:29.821877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.221 [2024-07-15 11:41:29.821881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.221 [2024-07-15 11:41:29.821886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.221 [2024-07-15 11:41:29.821896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.221 qpair failed and we were unable to recover it. 00:30:01.221 [2024-07-15 11:41:29.832006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.221 [2024-07-15 11:41:29.832074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.221 [2024-07-15 11:41:29.832085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.221 [2024-07-15 11:41:29.832090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.221 [2024-07-15 11:41:29.832094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.221 [2024-07-15 11:41:29.832108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.221 qpair failed and we were unable to recover it. 
00:30:01.221 [2024-07-15 11:41:29.842022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.221 [2024-07-15 11:41:29.842111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.221 [2024-07-15 11:41:29.842126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.221 [2024-07-15 11:41:29.842132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.221 [2024-07-15 11:41:29.842136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.221 [2024-07-15 11:41:29.842147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.221 qpair failed and we were unable to recover it. 00:30:01.221 [2024-07-15 11:41:29.852035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.221 [2024-07-15 11:41:29.852102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.221 [2024-07-15 11:41:29.852113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.221 [2024-07-15 11:41:29.852118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.221 [2024-07-15 11:41:29.852125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.221 [2024-07-15 11:41:29.852136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.221 qpair failed and we were unable to recover it. 00:30:01.221 [2024-07-15 11:41:29.862046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.221 [2024-07-15 11:41:29.862103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.221 [2024-07-15 11:41:29.862114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.221 [2024-07-15 11:41:29.862119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.221 [2024-07-15 11:41:29.862128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.221 [2024-07-15 11:41:29.862138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.221 qpair failed and we were unable to recover it. 
00:30:01.221 [2024-07-15 11:41:29.871986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.221 [2024-07-15 11:41:29.872056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.221 [2024-07-15 11:41:29.872068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.221 [2024-07-15 11:41:29.872073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.221 [2024-07-15 11:41:29.872077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.221 [2024-07-15 11:41:29.872087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.221 qpair failed and we were unable to recover it. 00:30:01.221 [2024-07-15 11:41:29.882131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.221 [2024-07-15 11:41:29.882207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.221 [2024-07-15 11:41:29.882219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.221 [2024-07-15 11:41:29.882224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.221 [2024-07-15 11:41:29.882228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.221 [2024-07-15 11:41:29.882239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.221 qpair failed and we were unable to recover it. 00:30:01.221 [2024-07-15 11:41:29.892161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.221 [2024-07-15 11:41:29.892226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.221 [2024-07-15 11:41:29.892238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.221 [2024-07-15 11:41:29.892243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.221 [2024-07-15 11:41:29.892247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.221 [2024-07-15 11:41:29.892258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.221 qpair failed and we were unable to recover it. 
00:30:01.221 [2024-07-15 11:41:29.902260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.221 [2024-07-15 11:41:29.902319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.221 [2024-07-15 11:41:29.902331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.221 [2024-07-15 11:41:29.902336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.221 [2024-07-15 11:41:29.902340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.221 [2024-07-15 11:41:29.902351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.221 qpair failed and we were unable to recover it. 00:30:01.221 [2024-07-15 11:41:29.912253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.221 [2024-07-15 11:41:29.912356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.221 [2024-07-15 11:41:29.912367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.221 [2024-07-15 11:41:29.912372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.221 [2024-07-15 11:41:29.912377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.221 [2024-07-15 11:41:29.912388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.221 qpair failed and we were unable to recover it. 00:30:01.490 [2024-07-15 11:41:29.922221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.490 [2024-07-15 11:41:29.922289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.490 [2024-07-15 11:41:29.922301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.490 [2024-07-15 11:41:29.922306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.490 [2024-07-15 11:41:29.922313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.490 [2024-07-15 11:41:29.922324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.490 qpair failed and we were unable to recover it. 
00:30:01.490 [2024-07-15 11:41:29.932346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.490 [2024-07-15 11:41:29.932412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.490 [2024-07-15 11:41:29.932424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.490 [2024-07-15 11:41:29.932429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.490 [2024-07-15 11:41:29.932433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.490 [2024-07-15 11:41:29.932443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.490 qpair failed and we were unable to recover it. 00:30:01.490 [2024-07-15 11:41:29.942189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.490 [2024-07-15 11:41:29.942247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.490 [2024-07-15 11:41:29.942259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.490 [2024-07-15 11:41:29.942264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.490 [2024-07-15 11:41:29.942268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.490 [2024-07-15 11:41:29.942279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.490 qpair failed and we were unable to recover it. 00:30:01.490 [2024-07-15 11:41:29.952255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.490 [2024-07-15 11:41:29.952320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.490 [2024-07-15 11:41:29.952332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.490 [2024-07-15 11:41:29.952336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.490 [2024-07-15 11:41:29.952340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.490 [2024-07-15 11:41:29.952351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.490 qpair failed and we were unable to recover it. 
00:30:01.490 [2024-07-15 11:41:29.962227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.491 [2024-07-15 11:41:29.962294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.491 [2024-07-15 11:41:29.962306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.491 [2024-07-15 11:41:29.962311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.491 [2024-07-15 11:41:29.962315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.491 [2024-07-15 11:41:29.962326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.491 qpair failed and we were unable to recover it. 00:30:01.491 [2024-07-15 11:41:29.972381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.491 [2024-07-15 11:41:29.972446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.491 [2024-07-15 11:41:29.972458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.491 [2024-07-15 11:41:29.972463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.491 [2024-07-15 11:41:29.972467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.491 [2024-07-15 11:41:29.972478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.491 qpair failed and we were unable to recover it. 00:30:01.491 [2024-07-15 11:41:29.982335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.491 [2024-07-15 11:41:29.982397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.491 [2024-07-15 11:41:29.982409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.491 [2024-07-15 11:41:29.982414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.491 [2024-07-15 11:41:29.982418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.491 [2024-07-15 11:41:29.982429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.491 qpair failed and we were unable to recover it. 
00:30:01.491 [2024-07-15 11:41:29.992425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.491 [2024-07-15 11:41:29.992490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.491 [2024-07-15 11:41:29.992501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.491 [2024-07-15 11:41:29.992506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.491 [2024-07-15 11:41:29.992510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.491 [2024-07-15 11:41:29.992520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.491 qpair failed and we were unable to recover it. 00:30:01.491 [2024-07-15 11:41:30.002372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.491 [2024-07-15 11:41:30.002442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.491 [2024-07-15 11:41:30.002454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.491 [2024-07-15 11:41:30.002459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.491 [2024-07-15 11:41:30.002463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.491 [2024-07-15 11:41:30.002474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.491 qpair failed and we were unable to recover it. 00:30:01.491 [2024-07-15 11:41:30.012511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.491 [2024-07-15 11:41:30.012572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.491 [2024-07-15 11:41:30.012585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.491 [2024-07-15 11:41:30.012592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.491 [2024-07-15 11:41:30.012597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.491 [2024-07-15 11:41:30.012608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.491 qpair failed and we were unable to recover it. 
00:30:01.491 [2024-07-15 11:41:30.022903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.491 [2024-07-15 11:41:30.023105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.491 [2024-07-15 11:41:30.023128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.491 [2024-07-15 11:41:30.023138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.491 [2024-07-15 11:41:30.023145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.491 [2024-07-15 11:41:30.023164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.491 qpair failed and we were unable to recover it. 00:30:01.491 [2024-07-15 11:41:30.032571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.491 [2024-07-15 11:41:30.032665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.491 [2024-07-15 11:41:30.032685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.491 [2024-07-15 11:41:30.032695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.491 [2024-07-15 11:41:30.032703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.491 [2024-07-15 11:41:30.032722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.491 qpair failed and we were unable to recover it. 00:30:01.491 [2024-07-15 11:41:30.042613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.491 [2024-07-15 11:41:30.042714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.491 [2024-07-15 11:41:30.042736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.491 [2024-07-15 11:41:30.042745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.491 [2024-07-15 11:41:30.042752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.491 [2024-07-15 11:41:30.042771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.491 qpair failed and we were unable to recover it. 
00:30:01.491 [2024-07-15 11:41:30.052625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.491 [2024-07-15 11:41:30.052712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.491 [2024-07-15 11:41:30.052728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.491 [2024-07-15 11:41:30.052734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.491 [2024-07-15 11:41:30.052739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.491 [2024-07-15 11:41:30.052753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.491 qpair failed and we were unable to recover it. 00:30:01.491 [2024-07-15 11:41:30.062576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.491 [2024-07-15 11:41:30.062638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.491 [2024-07-15 11:41:30.062650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.491 [2024-07-15 11:41:30.062656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.491 [2024-07-15 11:41:30.062660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.491 [2024-07-15 11:41:30.062672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.491 qpair failed and we were unable to recover it. 00:30:01.491 [2024-07-15 11:41:30.072622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.491 [2024-07-15 11:41:30.072686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.491 [2024-07-15 11:41:30.072698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.491 [2024-07-15 11:41:30.072703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.491 [2024-07-15 11:41:30.072707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.491 [2024-07-15 11:41:30.072719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.491 qpair failed and we were unable to recover it. 
00:30:01.492 [2024-07-15 11:41:30.082693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.492 [2024-07-15 11:41:30.082786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.492 [2024-07-15 11:41:30.082805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.492 [2024-07-15 11:41:30.082813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.492 [2024-07-15 11:41:30.082820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.492 [2024-07-15 11:41:30.082838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.492 qpair failed and we were unable to recover it. 00:30:01.492 [2024-07-15 11:41:30.092675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.492 [2024-07-15 11:41:30.092747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.492 [2024-07-15 11:41:30.092760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.492 [2024-07-15 11:41:30.092765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.492 [2024-07-15 11:41:30.092770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.492 [2024-07-15 11:41:30.092782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.492 qpair failed and we were unable to recover it. 00:30:01.492 [2024-07-15 11:41:30.102666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.492 [2024-07-15 11:41:30.102731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.492 [2024-07-15 11:41:30.102754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.492 [2024-07-15 11:41:30.102760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.492 [2024-07-15 11:41:30.102764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.492 [2024-07-15 11:41:30.102778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.492 qpair failed and we were unable to recover it. 
00:30:01.492 [2024-07-15 11:41:30.112760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.492 [2024-07-15 11:41:30.112873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.492 [2024-07-15 11:41:30.112892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.492 [2024-07-15 11:41:30.112898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.492 [2024-07-15 11:41:30.112903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.492 [2024-07-15 11:41:30.112916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.492 qpair failed and we were unable to recover it. 00:30:01.492 [2024-07-15 11:41:30.122802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.492 [2024-07-15 11:41:30.122877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.492 [2024-07-15 11:41:30.122895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.492 [2024-07-15 11:41:30.122901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.492 [2024-07-15 11:41:30.122906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.492 [2024-07-15 11:41:30.122921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.492 qpair failed and we were unable to recover it. 00:30:01.492 [2024-07-15 11:41:30.132719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.492 [2024-07-15 11:41:30.132777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.492 [2024-07-15 11:41:30.132790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.492 [2024-07-15 11:41:30.132795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.492 [2024-07-15 11:41:30.132799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.492 [2024-07-15 11:41:30.132811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.492 qpair failed and we were unable to recover it. 
00:30:01.492 [2024-07-15 11:41:30.142785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.492 [2024-07-15 11:41:30.142847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.492 [2024-07-15 11:41:30.142865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.492 [2024-07-15 11:41:30.142872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.492 [2024-07-15 11:41:30.142876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.492 [2024-07-15 11:41:30.142896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.492 qpair failed and we were unable to recover it. 00:30:01.492 [2024-07-15 11:41:30.152816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.492 [2024-07-15 11:41:30.152885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.492 [2024-07-15 11:41:30.152898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.492 [2024-07-15 11:41:30.152903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.492 [2024-07-15 11:41:30.152908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.492 [2024-07-15 11:41:30.152919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.492 qpair failed and we were unable to recover it. 00:30:01.492 [2024-07-15 11:41:30.162876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.492 [2024-07-15 11:41:30.162944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.492 [2024-07-15 11:41:30.162955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.492 [2024-07-15 11:41:30.162960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.492 [2024-07-15 11:41:30.162965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.492 [2024-07-15 11:41:30.162976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.492 qpair failed and we were unable to recover it. 
00:30:01.492 [2024-07-15 11:41:30.172813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.492 [2024-07-15 11:41:30.172868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.492 [2024-07-15 11:41:30.172880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.492 [2024-07-15 11:41:30.172885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.492 [2024-07-15 11:41:30.172889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.492 [2024-07-15 11:41:30.172900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.492 qpair failed and we were unable to recover it. 00:30:01.492 [2024-07-15 11:41:30.182808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.492 [2024-07-15 11:41:30.182913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.492 [2024-07-15 11:41:30.182931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.492 [2024-07-15 11:41:30.182937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.492 [2024-07-15 11:41:30.182942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.492 [2024-07-15 11:41:30.182956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.493 qpair failed and we were unable to recover it. 00:30:01.755 [2024-07-15 11:41:30.192946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.755 [2024-07-15 11:41:30.193011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.755 [2024-07-15 11:41:30.193027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.755 [2024-07-15 11:41:30.193033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.755 [2024-07-15 11:41:30.193037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.755 [2024-07-15 11:41:30.193049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.755 qpair failed and we were unable to recover it. 
00:30:01.755 [2024-07-15 11:41:30.202957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.755 [2024-07-15 11:41:30.203024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.755 [2024-07-15 11:41:30.203036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.755 [2024-07-15 11:41:30.203041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.755 [2024-07-15 11:41:30.203045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.755 [2024-07-15 11:41:30.203057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.755 qpair failed and we were unable to recover it. 00:30:01.755 [2024-07-15 11:41:30.212960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.755 [2024-07-15 11:41:30.213020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.755 [2024-07-15 11:41:30.213031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.755 [2024-07-15 11:41:30.213036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.755 [2024-07-15 11:41:30.213041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.755 [2024-07-15 11:41:30.213052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.755 qpair failed and we were unable to recover it. 00:30:01.755 [2024-07-15 11:41:30.223017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.755 [2024-07-15 11:41:30.223072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.755 [2024-07-15 11:41:30.223084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.755 [2024-07-15 11:41:30.223089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.755 [2024-07-15 11:41:30.223093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.755 [2024-07-15 11:41:30.223104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.755 qpair failed and we were unable to recover it. 
00:30:01.755 [2024-07-15 11:41:30.232941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.755 [2024-07-15 11:41:30.233008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.755 [2024-07-15 11:41:30.233019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.755 [2024-07-15 11:41:30.233024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.755 [2024-07-15 11:41:30.233028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.755 [2024-07-15 11:41:30.233042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.755 qpair failed and we were unable to recover it. 00:30:01.755 [2024-07-15 11:41:30.243026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.755 [2024-07-15 11:41:30.243096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.755 [2024-07-15 11:41:30.243108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.755 [2024-07-15 11:41:30.243113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.755 [2024-07-15 11:41:30.243117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.755 [2024-07-15 11:41:30.243131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.755 qpair failed and we were unable to recover it. 00:30:01.755 [2024-07-15 11:41:30.252945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.755 [2024-07-15 11:41:30.253003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.755 [2024-07-15 11:41:30.253014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.755 [2024-07-15 11:41:30.253019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.755 [2024-07-15 11:41:30.253023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.755 [2024-07-15 11:41:30.253034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.755 qpair failed and we were unable to recover it. 
00:30:01.755 [2024-07-15 11:41:30.263124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.755 [2024-07-15 11:41:30.263226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.755 [2024-07-15 11:41:30.263238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.755 [2024-07-15 11:41:30.263243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.755 [2024-07-15 11:41:30.263247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.755 [2024-07-15 11:41:30.263258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.755 qpair failed and we were unable to recover it. 00:30:01.755 [2024-07-15 11:41:30.273171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.755 [2024-07-15 11:41:30.273233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.755 [2024-07-15 11:41:30.273245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.755 [2024-07-15 11:41:30.273250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.755 [2024-07-15 11:41:30.273254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.755 [2024-07-15 11:41:30.273265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.755 qpair failed and we were unable to recover it. 00:30:01.755 [2024-07-15 11:41:30.283084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.755 [2024-07-15 11:41:30.283155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.755 [2024-07-15 11:41:30.283170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.755 [2024-07-15 11:41:30.283175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.755 [2024-07-15 11:41:30.283179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.755 [2024-07-15 11:41:30.283190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.755 qpair failed and we were unable to recover it. 
00:30:01.755 [2024-07-15 11:41:30.293184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.755 [2024-07-15 11:41:30.293244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.755 [2024-07-15 11:41:30.293256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.755 [2024-07-15 11:41:30.293261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.755 [2024-07-15 11:41:30.293265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.755 [2024-07-15 11:41:30.293275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.755 qpair failed and we were unable to recover it. 00:30:01.755 [2024-07-15 11:41:30.303211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.755 [2024-07-15 11:41:30.303273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.756 [2024-07-15 11:41:30.303284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.756 [2024-07-15 11:41:30.303289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.756 [2024-07-15 11:41:30.303293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.756 [2024-07-15 11:41:30.303304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.756 qpair failed and we were unable to recover it. 00:30:01.756 [2024-07-15 11:41:30.313264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.756 [2024-07-15 11:41:30.313329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.756 [2024-07-15 11:41:30.313340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.756 [2024-07-15 11:41:30.313345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.756 [2024-07-15 11:41:30.313349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.756 [2024-07-15 11:41:30.313360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.756 qpair failed and we were unable to recover it. 
00:30:01.756 [2024-07-15 11:41:30.323351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.756 [2024-07-15 11:41:30.323427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.756 [2024-07-15 11:41:30.323439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.756 [2024-07-15 11:41:30.323444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.756 [2024-07-15 11:41:30.323452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.756 [2024-07-15 11:41:30.323464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.756 qpair failed and we were unable to recover it. 00:30:01.756 [2024-07-15 11:41:30.333207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.756 [2024-07-15 11:41:30.333293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.756 [2024-07-15 11:41:30.333304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.756 [2024-07-15 11:41:30.333309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.756 [2024-07-15 11:41:30.333314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.756 [2024-07-15 11:41:30.333324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.756 qpair failed and we were unable to recover it. 00:30:01.756 [2024-07-15 11:41:30.343332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.756 [2024-07-15 11:41:30.343391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.756 [2024-07-15 11:41:30.343402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.756 [2024-07-15 11:41:30.343407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.756 [2024-07-15 11:41:30.343411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.756 [2024-07-15 11:41:30.343422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.756 qpair failed and we were unable to recover it. 
00:30:01.756 [2024-07-15 11:41:30.353422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.756 [2024-07-15 11:41:30.353484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.756 [2024-07-15 11:41:30.353496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.756 [2024-07-15 11:41:30.353501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.756 [2024-07-15 11:41:30.353506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.756 [2024-07-15 11:41:30.353516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.756 qpair failed and we were unable to recover it. 00:30:01.756 [2024-07-15 11:41:30.363459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.756 [2024-07-15 11:41:30.363548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.756 [2024-07-15 11:41:30.363559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.756 [2024-07-15 11:41:30.363564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.756 [2024-07-15 11:41:30.363568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.756 [2024-07-15 11:41:30.363579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.756 qpair failed and we were unable to recover it. 00:30:01.756 [2024-07-15 11:41:30.373440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.756 [2024-07-15 11:41:30.373504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.756 [2024-07-15 11:41:30.373515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.756 [2024-07-15 11:41:30.373520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.756 [2024-07-15 11:41:30.373524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.756 [2024-07-15 11:41:30.373535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.756 qpair failed and we were unable to recover it. 
00:30:01.756 [2024-07-15 11:41:30.383447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.756 [2024-07-15 11:41:30.383508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.756 [2024-07-15 11:41:30.383520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.756 [2024-07-15 11:41:30.383525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.756 [2024-07-15 11:41:30.383529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.756 [2024-07-15 11:41:30.383539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.756 qpair failed and we were unable to recover it. 00:30:01.756 [2024-07-15 11:41:30.393532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.756 [2024-07-15 11:41:30.393594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.756 [2024-07-15 11:41:30.393605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.756 [2024-07-15 11:41:30.393610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.756 [2024-07-15 11:41:30.393614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.756 [2024-07-15 11:41:30.393625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.756 qpair failed and we were unable to recover it. 00:30:01.756 [2024-07-15 11:41:30.403544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.756 [2024-07-15 11:41:30.403611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.756 [2024-07-15 11:41:30.403622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.756 [2024-07-15 11:41:30.403627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.756 [2024-07-15 11:41:30.403631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.756 [2024-07-15 11:41:30.403642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.756 qpair failed and we were unable to recover it. 
00:30:01.756 [2024-07-15 11:41:30.413577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.756 [2024-07-15 11:41:30.413636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.756 [2024-07-15 11:41:30.413647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.756 [2024-07-15 11:41:30.413655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.756 [2024-07-15 11:41:30.413659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.756 [2024-07-15 11:41:30.413670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.756 qpair failed and we were unable to recover it. 00:30:01.756 [2024-07-15 11:41:30.423569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.756 [2024-07-15 11:41:30.423659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.756 [2024-07-15 11:41:30.423671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.756 [2024-07-15 11:41:30.423675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.756 [2024-07-15 11:41:30.423680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.756 [2024-07-15 11:41:30.423690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.756 qpair failed and we were unable to recover it. 00:30:01.756 [2024-07-15 11:41:30.433707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.756 [2024-07-15 11:41:30.433797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.756 [2024-07-15 11:41:30.433809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.756 [2024-07-15 11:41:30.433813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.756 [2024-07-15 11:41:30.433818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.756 [2024-07-15 11:41:30.433828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.756 qpair failed and we were unable to recover it. 
00:30:01.756 [2024-07-15 11:41:30.443601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.756 [2024-07-15 11:41:30.443677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.756 [2024-07-15 11:41:30.443689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.756 [2024-07-15 11:41:30.443694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.756 [2024-07-15 11:41:30.443698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.757 [2024-07-15 11:41:30.443709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.757 qpair failed and we were unable to recover it. 00:30:01.757 [2024-07-15 11:41:30.453733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.757 [2024-07-15 11:41:30.453792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.757 [2024-07-15 11:41:30.453803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.757 [2024-07-15 11:41:30.453808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.757 [2024-07-15 11:41:30.453812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:01.757 [2024-07-15 11:41:30.453822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.757 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 11:41:30.463659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.018 [2024-07-15 11:41:30.463720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.018 [2024-07-15 11:41:30.463738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.018 [2024-07-15 11:41:30.463744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.018 [2024-07-15 11:41:30.463749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.018 [2024-07-15 11:41:30.463764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.018 qpair failed and we were unable to recover it. 
00:30:02.018 [2024-07-15 11:41:30.473724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.018 [2024-07-15 11:41:30.473794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.018 [2024-07-15 11:41:30.473812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.018 [2024-07-15 11:41:30.473818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.018 [2024-07-15 11:41:30.473823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.018 [2024-07-15 11:41:30.473837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 11:41:30.483684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.018 [2024-07-15 11:41:30.483803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.018 [2024-07-15 11:41:30.483821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.018 [2024-07-15 11:41:30.483827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.018 [2024-07-15 11:41:30.483832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.018 [2024-07-15 11:41:30.483846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 11:41:30.493782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.018 [2024-07-15 11:41:30.493848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.018 [2024-07-15 11:41:30.493867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.018 [2024-07-15 11:41:30.493873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.018 [2024-07-15 11:41:30.493877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.018 [2024-07-15 11:41:30.493892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.018 qpair failed and we were unable to recover it. 
00:30:02.018 [2024-07-15 11:41:30.503813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.018 [2024-07-15 11:41:30.503899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.018 [2024-07-15 11:41:30.503917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.018 [2024-07-15 11:41:30.503927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.018 [2024-07-15 11:41:30.503932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.018 [2024-07-15 11:41:30.503946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 11:41:30.513822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.018 [2024-07-15 11:41:30.513888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.018 [2024-07-15 11:41:30.513902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.018 [2024-07-15 11:41:30.513907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.018 [2024-07-15 11:41:30.513911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.018 [2024-07-15 11:41:30.513923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 11:41:30.523825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.018 [2024-07-15 11:41:30.523893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.018 [2024-07-15 11:41:30.523906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.018 [2024-07-15 11:41:30.523910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.018 [2024-07-15 11:41:30.523916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.018 [2024-07-15 11:41:30.523927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.018 qpair failed and we were unable to recover it. 
00:30:02.018 [2024-07-15 11:41:30.533724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.018 [2024-07-15 11:41:30.533784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.018 [2024-07-15 11:41:30.533796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.018 [2024-07-15 11:41:30.533801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.018 [2024-07-15 11:41:30.533805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.018 [2024-07-15 11:41:30.533816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 11:41:30.543882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.018 [2024-07-15 11:41:30.543941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.018 [2024-07-15 11:41:30.543953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.018 [2024-07-15 11:41:30.543958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.018 [2024-07-15 11:41:30.543962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.018 [2024-07-15 11:41:30.543973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 11:41:30.553973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.018 [2024-07-15 11:41:30.554082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.018 [2024-07-15 11:41:30.554094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.018 [2024-07-15 11:41:30.554099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.018 [2024-07-15 11:41:30.554105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.018 [2024-07-15 11:41:30.554118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.018 qpair failed and we were unable to recover it. 
00:30:02.018 [2024-07-15 11:41:30.563922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.018 [2024-07-15 11:41:30.563983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.018 [2024-07-15 11:41:30.563995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.018 [2024-07-15 11:41:30.564000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.018 [2024-07-15 11:41:30.564004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.018 [2024-07-15 11:41:30.564015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 11:41:30.573945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.018 [2024-07-15 11:41:30.574007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.018 [2024-07-15 11:41:30.574019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.018 [2024-07-15 11:41:30.574024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.018 [2024-07-15 11:41:30.574028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.018 [2024-07-15 11:41:30.574038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.018 qpair failed and we were unable to recover it. 00:30:02.018 [2024-07-15 11:41:30.583997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.018 [2024-07-15 11:41:30.584057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.018 [2024-07-15 11:41:30.584069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.018 [2024-07-15 11:41:30.584074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.018 [2024-07-15 11:41:30.584078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.018 [2024-07-15 11:41:30.584088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.018 qpair failed and we were unable to recover it. 
00:30:02.018 [2024-07-15 11:41:30.594060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.018 [2024-07-15 11:41:30.594125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.018 [2024-07-15 11:41:30.594140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.018 [2024-07-15 11:41:30.594145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.019 [2024-07-15 11:41:30.594149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.019 [2024-07-15 11:41:30.594160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.019 qpair failed and we were unable to recover it. 00:30:02.019 [2024-07-15 11:41:30.604059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.019 [2024-07-15 11:41:30.604164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.019 [2024-07-15 11:41:30.604175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.019 [2024-07-15 11:41:30.604180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.019 [2024-07-15 11:41:30.604184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.019 [2024-07-15 11:41:30.604195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.019 qpair failed and we were unable to recover it. 00:30:02.019 [2024-07-15 11:41:30.614086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.019 [2024-07-15 11:41:30.614146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.019 [2024-07-15 11:41:30.614157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.019 [2024-07-15 11:41:30.614162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.019 [2024-07-15 11:41:30.614167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.019 [2024-07-15 11:41:30.614177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.019 qpair failed and we were unable to recover it. 
00:30:02.019 [2024-07-15 11:41:30.624089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.019 [2024-07-15 11:41:30.624148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.019 [2024-07-15 11:41:30.624159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.019 [2024-07-15 11:41:30.624164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.019 [2024-07-15 11:41:30.624168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.019 [2024-07-15 11:41:30.624179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.019 qpair failed and we were unable to recover it. 00:30:02.019 [2024-07-15 11:41:30.634170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.019 [2024-07-15 11:41:30.634235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.019 [2024-07-15 11:41:30.634247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.019 [2024-07-15 11:41:30.634252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.019 [2024-07-15 11:41:30.634256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.019 [2024-07-15 11:41:30.634270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.019 qpair failed and we were unable to recover it. 00:30:02.019 [2024-07-15 11:41:30.644157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.019 [2024-07-15 11:41:30.644221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.019 [2024-07-15 11:41:30.644233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.019 [2024-07-15 11:41:30.644238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.019 [2024-07-15 11:41:30.644242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.019 [2024-07-15 11:41:30.644252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.019 qpair failed and we were unable to recover it. 
00:30:02.019 [2024-07-15 11:41:30.654179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.019 [2024-07-15 11:41:30.654236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.019 [2024-07-15 11:41:30.654248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.019 [2024-07-15 11:41:30.654253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.019 [2024-07-15 11:41:30.654257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.019 [2024-07-15 11:41:30.654267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.019 qpair failed and we were unable to recover it. 00:30:02.019 [2024-07-15 11:41:30.664209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.019 [2024-07-15 11:41:30.664269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.019 [2024-07-15 11:41:30.664281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.019 [2024-07-15 11:41:30.664285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.019 [2024-07-15 11:41:30.664289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.019 [2024-07-15 11:41:30.664300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.019 qpair failed and we were unable to recover it. 00:30:02.019 [2024-07-15 11:41:30.674310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.019 [2024-07-15 11:41:30.674395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.019 [2024-07-15 11:41:30.674407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.019 [2024-07-15 11:41:30.674411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.019 [2024-07-15 11:41:30.674416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.019 [2024-07-15 11:41:30.674426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.019 qpair failed and we were unable to recover it. 
00:30:02.019 [2024-07-15 11:41:30.684267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.019 [2024-07-15 11:41:30.684331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.019 [2024-07-15 11:41:30.684345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.019 [2024-07-15 11:41:30.684350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.019 [2024-07-15 11:41:30.684354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.019 [2024-07-15 11:41:30.684365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.019 qpair failed and we were unable to recover it. 00:30:02.019 [2024-07-15 11:41:30.694207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.019 [2024-07-15 11:41:30.694298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.019 [2024-07-15 11:41:30.694311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.019 [2024-07-15 11:41:30.694316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.019 [2024-07-15 11:41:30.694322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.019 [2024-07-15 11:41:30.694333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.019 qpair failed and we were unable to recover it. 00:30:02.019 [2024-07-15 11:41:30.704286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.019 [2024-07-15 11:41:30.704343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.019 [2024-07-15 11:41:30.704355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.019 [2024-07-15 11:41:30.704359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.019 [2024-07-15 11:41:30.704363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.019 [2024-07-15 11:41:30.704374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.019 qpair failed and we were unable to recover it. 
00:30:02.019 [2024-07-15 11:41:30.714375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.019 [2024-07-15 11:41:30.714439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.019 [2024-07-15 11:41:30.714451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.019 [2024-07-15 11:41:30.714456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.019 [2024-07-15 11:41:30.714460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.019 [2024-07-15 11:41:30.714471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.019 qpair failed and we were unable to recover it. 00:30:02.281 [2024-07-15 11:41:30.724356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.281 [2024-07-15 11:41:30.724434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.281 [2024-07-15 11:41:30.724445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.281 [2024-07-15 11:41:30.724450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.281 [2024-07-15 11:41:30.724457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.281 [2024-07-15 11:41:30.724468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.281 qpair failed and we were unable to recover it. 00:30:02.281 [2024-07-15 11:41:30.734415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.281 [2024-07-15 11:41:30.734468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.281 [2024-07-15 11:41:30.734480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.281 [2024-07-15 11:41:30.734485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.281 [2024-07-15 11:41:30.734489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.281 [2024-07-15 11:41:30.734499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.281 qpair failed and we were unable to recover it. 
00:30:02.281 [2024-07-15 11:41:30.744413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.281 [2024-07-15 11:41:30.744470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.281 [2024-07-15 11:41:30.744482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.281 [2024-07-15 11:41:30.744487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.281 [2024-07-15 11:41:30.744491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.281 [2024-07-15 11:41:30.744501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.281 qpair failed and we were unable to recover it. 00:30:02.281 [2024-07-15 11:41:30.754493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.281 [2024-07-15 11:41:30.754557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.281 [2024-07-15 11:41:30.754570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.281 [2024-07-15 11:41:30.754575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.281 [2024-07-15 11:41:30.754579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.281 [2024-07-15 11:41:30.754590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.281 qpair failed and we were unable to recover it. 00:30:02.281 [2024-07-15 11:41:30.764494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.281 [2024-07-15 11:41:30.764558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.281 [2024-07-15 11:41:30.764569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.281 [2024-07-15 11:41:30.764574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.281 [2024-07-15 11:41:30.764578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.281 [2024-07-15 11:41:30.764589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.281 qpair failed and we were unable to recover it. 
00:30:02.282 [2024-07-15 11:41:30.774513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.282 [2024-07-15 11:41:30.774580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.282 [2024-07-15 11:41:30.774591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.282 [2024-07-15 11:41:30.774596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.282 [2024-07-15 11:41:30.774600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.282 [2024-07-15 11:41:30.774611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.282 qpair failed and we were unable to recover it. 00:30:02.282 [2024-07-15 11:41:30.784552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.282 [2024-07-15 11:41:30.784613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.282 [2024-07-15 11:41:30.784625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.282 [2024-07-15 11:41:30.784630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.282 [2024-07-15 11:41:30.784634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.282 [2024-07-15 11:41:30.784645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.282 qpair failed and we were unable to recover it. 00:30:02.282 [2024-07-15 11:41:30.794553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.282 [2024-07-15 11:41:30.794615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.282 [2024-07-15 11:41:30.794626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.282 [2024-07-15 11:41:30.794631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.282 [2024-07-15 11:41:30.794635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.282 [2024-07-15 11:41:30.794646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.282 qpair failed and we were unable to recover it. 
00:30:02.282 [2024-07-15 11:41:30.804466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.282 [2024-07-15 11:41:30.804576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.282 [2024-07-15 11:41:30.804588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.282 [2024-07-15 11:41:30.804595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.282 [2024-07-15 11:41:30.804599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.282 [2024-07-15 11:41:30.804610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.282 qpair failed and we were unable to recover it. 00:30:02.282 [2024-07-15 11:41:30.814623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.282 [2024-07-15 11:41:30.814679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.282 [2024-07-15 11:41:30.814691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.282 [2024-07-15 11:41:30.814698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.282 [2024-07-15 11:41:30.814703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.282 [2024-07-15 11:41:30.814714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.282 qpair failed and we were unable to recover it. 00:30:02.282 [2024-07-15 11:41:30.824617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.282 [2024-07-15 11:41:30.824701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.282 [2024-07-15 11:41:30.824713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.282 [2024-07-15 11:41:30.824718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.282 [2024-07-15 11:41:30.824722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.282 [2024-07-15 11:41:30.824732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.282 qpair failed and we were unable to recover it. 
00:30:02.282 [2024-07-15 11:41:30.834708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.282 [2024-07-15 11:41:30.834778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.282 [2024-07-15 11:41:30.834790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.282 [2024-07-15 11:41:30.834794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.282 [2024-07-15 11:41:30.834798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.282 [2024-07-15 11:41:30.834809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.282 qpair failed and we were unable to recover it. 00:30:02.282 [2024-07-15 11:41:30.844692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.282 [2024-07-15 11:41:30.844758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.282 [2024-07-15 11:41:30.844769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.282 [2024-07-15 11:41:30.844774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.282 [2024-07-15 11:41:30.844778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.282 [2024-07-15 11:41:30.844789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.282 qpair failed and we were unable to recover it. 00:30:02.282 [2024-07-15 11:41:30.854827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.282 [2024-07-15 11:41:30.854886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.282 [2024-07-15 11:41:30.854897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.282 [2024-07-15 11:41:30.854902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.282 [2024-07-15 11:41:30.854907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.282 [2024-07-15 11:41:30.854917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.282 qpair failed and we were unable to recover it. 
00:30:02.282 [2024-07-15 11:41:30.864741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.282 [2024-07-15 11:41:30.864800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.282 [2024-07-15 11:41:30.864812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.282 [2024-07-15 11:41:30.864817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.282 [2024-07-15 11:41:30.864821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.282 [2024-07-15 11:41:30.864831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.282 qpair failed and we were unable to recover it. 00:30:02.282 [2024-07-15 11:41:30.874821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.282 [2024-07-15 11:41:30.874886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.282 [2024-07-15 11:41:30.874898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.282 [2024-07-15 11:41:30.874903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.282 [2024-07-15 11:41:30.874907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.282 [2024-07-15 11:41:30.874918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.282 qpair failed and we were unable to recover it. 00:30:02.282 [2024-07-15 11:41:30.884786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.282 [2024-07-15 11:41:30.884850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.282 [2024-07-15 11:41:30.884862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.282 [2024-07-15 11:41:30.884867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.282 [2024-07-15 11:41:30.884871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.282 [2024-07-15 11:41:30.884881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.282 qpair failed and we were unable to recover it. 
00:30:02.282 [2024-07-15 11:41:30.894820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.282 [2024-07-15 11:41:30.894881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.282 [2024-07-15 11:41:30.894894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.282 [2024-07-15 11:41:30.894899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.282 [2024-07-15 11:41:30.894903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.282 [2024-07-15 11:41:30.894914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.282 qpair failed and we were unable to recover it. 00:30:02.282 [2024-07-15 11:41:30.904962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.282 [2024-07-15 11:41:30.905020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.282 [2024-07-15 11:41:30.905032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.282 [2024-07-15 11:41:30.905042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.282 [2024-07-15 11:41:30.905047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.282 [2024-07-15 11:41:30.905058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.282 qpair failed and we were unable to recover it. 00:30:02.282 [2024-07-15 11:41:30.914921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.283 [2024-07-15 11:41:30.914987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.283 [2024-07-15 11:41:30.914998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.283 [2024-07-15 11:41:30.915003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.283 [2024-07-15 11:41:30.915007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.283 [2024-07-15 11:41:30.915018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.283 qpair failed and we were unable to recover it. 
00:30:02.283 [2024-07-15 11:41:30.924893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.283 [2024-07-15 11:41:30.924954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.283 [2024-07-15 11:41:30.924966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.283 [2024-07-15 11:41:30.924971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.283 [2024-07-15 11:41:30.924975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.283 [2024-07-15 11:41:30.924986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.283 qpair failed and we were unable to recover it. 00:30:02.283 [2024-07-15 11:41:30.934838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.283 [2024-07-15 11:41:30.934895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.283 [2024-07-15 11:41:30.934906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.283 [2024-07-15 11:41:30.934911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.283 [2024-07-15 11:41:30.934915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.283 [2024-07-15 11:41:30.934926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.283 qpair failed and we were unable to recover it. 00:30:02.283 [2024-07-15 11:41:30.944978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.283 [2024-07-15 11:41:30.945036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.283 [2024-07-15 11:41:30.945047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.283 [2024-07-15 11:41:30.945052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.283 [2024-07-15 11:41:30.945056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.283 [2024-07-15 11:41:30.945067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.283 qpair failed and we were unable to recover it. 
00:30:02.283 [2024-07-15 11:41:30.955018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.283 [2024-07-15 11:41:30.955085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.283 [2024-07-15 11:41:30.955097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.283 [2024-07-15 11:41:30.955101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.283 [2024-07-15 11:41:30.955105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.283 [2024-07-15 11:41:30.955116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.283 qpair failed and we were unable to recover it. 00:30:02.283 [2024-07-15 11:41:30.965016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.283 [2024-07-15 11:41:30.965081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.283 [2024-07-15 11:41:30.965093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.283 [2024-07-15 11:41:30.965098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.283 [2024-07-15 11:41:30.965102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.283 [2024-07-15 11:41:30.965113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.283 qpair failed and we were unable to recover it. 00:30:02.283 [2024-07-15 11:41:30.975023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.283 [2024-07-15 11:41:30.975085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.283 [2024-07-15 11:41:30.975096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.283 [2024-07-15 11:41:30.975101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.283 [2024-07-15 11:41:30.975106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.283 [2024-07-15 11:41:30.975117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.283 qpair failed and we were unable to recover it. 
00:30:02.544 [2024-07-15 11:41:30.985072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.544 [2024-07-15 11:41:30.985135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.544 [2024-07-15 11:41:30.985147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.544 [2024-07-15 11:41:30.985152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.544 [2024-07-15 11:41:30.985156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.544 [2024-07-15 11:41:30.985167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.544 qpair failed and we were unable to recover it. 00:30:02.544 [2024-07-15 11:41:30.995130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.544 [2024-07-15 11:41:30.995193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.544 [2024-07-15 11:41:30.995208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.544 [2024-07-15 11:41:30.995213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.544 [2024-07-15 11:41:30.995217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.544 [2024-07-15 11:41:30.995228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.544 qpair failed and we were unable to recover it. 00:30:02.544 [2024-07-15 11:41:31.005125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.544 [2024-07-15 11:41:31.005190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.544 [2024-07-15 11:41:31.005202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.544 [2024-07-15 11:41:31.005207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.544 [2024-07-15 11:41:31.005211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.545 [2024-07-15 11:41:31.005222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.545 qpair failed and we were unable to recover it. 
00:30:02.545 [2024-07-15 11:41:31.015159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.545 [2024-07-15 11:41:31.015219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.545 [2024-07-15 11:41:31.015230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.545 [2024-07-15 11:41:31.015235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.545 [2024-07-15 11:41:31.015239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.545 [2024-07-15 11:41:31.015251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.545 qpair failed and we were unable to recover it. 00:30:02.545 [2024-07-15 11:41:31.025179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.545 [2024-07-15 11:41:31.025240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.545 [2024-07-15 11:41:31.025252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.545 [2024-07-15 11:41:31.025256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.545 [2024-07-15 11:41:31.025260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.545 [2024-07-15 11:41:31.025271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.545 qpair failed and we were unable to recover it. 00:30:02.545 [2024-07-15 11:41:31.035301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.545 [2024-07-15 11:41:31.035394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.545 [2024-07-15 11:41:31.035405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.545 [2024-07-15 11:41:31.035410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.545 [2024-07-15 11:41:31.035414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.545 [2024-07-15 11:41:31.035428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.545 qpair failed and we were unable to recover it. 
00:30:02.545 [2024-07-15 11:41:31.045162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.545 [2024-07-15 11:41:31.045378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.545 [2024-07-15 11:41:31.045390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.545 [2024-07-15 11:41:31.045395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.545 [2024-07-15 11:41:31.045399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.545 [2024-07-15 11:41:31.045410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.545 qpair failed and we were unable to recover it. 00:30:02.545 [2024-07-15 11:41:31.055275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.545 [2024-07-15 11:41:31.055340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.545 [2024-07-15 11:41:31.055352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.545 [2024-07-15 11:41:31.055359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.545 [2024-07-15 11:41:31.055364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.545 [2024-07-15 11:41:31.055376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.545 qpair failed and we were unable to recover it. 00:30:02.545 [2024-07-15 11:41:31.065297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.545 [2024-07-15 11:41:31.065356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.545 [2024-07-15 11:41:31.065367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.545 [2024-07-15 11:41:31.065372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.545 [2024-07-15 11:41:31.065376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.545 [2024-07-15 11:41:31.065386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.545 qpair failed and we were unable to recover it. 
00:30:02.545 [2024-07-15 11:41:31.075367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.545 [2024-07-15 11:41:31.075434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.545 [2024-07-15 11:41:31.075445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.545 [2024-07-15 11:41:31.075450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.545 [2024-07-15 11:41:31.075454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.545 [2024-07-15 11:41:31.075464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.545 qpair failed and we were unable to recover it. 00:30:02.545 [2024-07-15 11:41:31.085375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.545 [2024-07-15 11:41:31.085529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.545 [2024-07-15 11:41:31.085545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.545 [2024-07-15 11:41:31.085550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.545 [2024-07-15 11:41:31.085554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.545 [2024-07-15 11:41:31.085565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.545 qpair failed and we were unable to recover it. 00:30:02.545 [2024-07-15 11:41:31.095383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.545 [2024-07-15 11:41:31.095447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.545 [2024-07-15 11:41:31.095459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.545 [2024-07-15 11:41:31.095463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.545 [2024-07-15 11:41:31.095467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.545 [2024-07-15 11:41:31.095478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.545 qpair failed and we were unable to recover it. 
00:30:02.545 [2024-07-15 11:41:31.105398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.545 [2024-07-15 11:41:31.105460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.545 [2024-07-15 11:41:31.105472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.545 [2024-07-15 11:41:31.105477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.545 [2024-07-15 11:41:31.105481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.545 [2024-07-15 11:41:31.105492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.545 qpair failed and we were unable to recover it. 00:30:02.545 [2024-07-15 11:41:31.115459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.545 [2024-07-15 11:41:31.115525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.545 [2024-07-15 11:41:31.115537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.545 [2024-07-15 11:41:31.115542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.545 [2024-07-15 11:41:31.115546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.545 [2024-07-15 11:41:31.115557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.545 qpair failed and we were unable to recover it. 00:30:02.545 [2024-07-15 11:41:31.125611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.545 [2024-07-15 11:41:31.125674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.545 [2024-07-15 11:41:31.125686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.545 [2024-07-15 11:41:31.125691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.545 [2024-07-15 11:41:31.125698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.545 [2024-07-15 11:41:31.125709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.545 qpair failed and we were unable to recover it. 
00:30:02.545 [2024-07-15 11:41:31.135478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.545 [2024-07-15 11:41:31.135537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.545 [2024-07-15 11:41:31.135548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.545 [2024-07-15 11:41:31.135553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.545 [2024-07-15 11:41:31.135557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.545 [2024-07-15 11:41:31.135568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.545 qpair failed and we were unable to recover it. 00:30:02.545 [2024-07-15 11:41:31.145525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.545 [2024-07-15 11:41:31.145586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.545 [2024-07-15 11:41:31.145598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.545 [2024-07-15 11:41:31.145603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.545 [2024-07-15 11:41:31.145607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.545 [2024-07-15 11:41:31.145617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.546 qpair failed and we were unable to recover it. 00:30:02.546 [2024-07-15 11:41:31.155581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.546 [2024-07-15 11:41:31.155645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.546 [2024-07-15 11:41:31.155656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.546 [2024-07-15 11:41:31.155661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.546 [2024-07-15 11:41:31.155665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.546 [2024-07-15 11:41:31.155676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.546 qpair failed and we were unable to recover it. 
00:30:02.546 [2024-07-15 11:41:31.165559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.546 [2024-07-15 11:41:31.165624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.546 [2024-07-15 11:41:31.165636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.546 [2024-07-15 11:41:31.165641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.546 [2024-07-15 11:41:31.165645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.546 [2024-07-15 11:41:31.165655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.546 qpair failed and we were unable to recover it. 00:30:02.546 [2024-07-15 11:41:31.175657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.546 [2024-07-15 11:41:31.175765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.546 [2024-07-15 11:41:31.175777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.546 [2024-07-15 11:41:31.175782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.546 [2024-07-15 11:41:31.175786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.546 [2024-07-15 11:41:31.175797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.546 qpair failed and we were unable to recover it. 00:30:02.546 [2024-07-15 11:41:31.185609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.546 [2024-07-15 11:41:31.185676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.546 [2024-07-15 11:41:31.185695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.546 [2024-07-15 11:41:31.185701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.546 [2024-07-15 11:41:31.185705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.546 [2024-07-15 11:41:31.185719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.546 qpair failed and we were unable to recover it. 
00:30:02.546 [2024-07-15 11:41:31.195586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.546 [2024-07-15 11:41:31.195655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.546 [2024-07-15 11:41:31.195668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.546 [2024-07-15 11:41:31.195673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.546 [2024-07-15 11:41:31.195677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.546 [2024-07-15 11:41:31.195689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.546 qpair failed and we were unable to recover it. 00:30:02.546 [2024-07-15 11:41:31.205656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.546 [2024-07-15 11:41:31.205718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.546 [2024-07-15 11:41:31.205731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.546 [2024-07-15 11:41:31.205736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.546 [2024-07-15 11:41:31.205740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.546 [2024-07-15 11:41:31.205751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.546 qpair failed and we were unable to recover it. 00:30:02.546 [2024-07-15 11:41:31.215684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.546 [2024-07-15 11:41:31.215745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.546 [2024-07-15 11:41:31.215757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.546 [2024-07-15 11:41:31.215762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.546 [2024-07-15 11:41:31.215769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.546 [2024-07-15 11:41:31.215780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.546 qpair failed and we were unable to recover it. 
00:30:02.546 [2024-07-15 11:41:31.225718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.546 [2024-07-15 11:41:31.225775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.546 [2024-07-15 11:41:31.225788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.546 [2024-07-15 11:41:31.225793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.546 [2024-07-15 11:41:31.225797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.546 [2024-07-15 11:41:31.225811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.546 qpair failed and we were unable to recover it. 00:30:02.546 [2024-07-15 11:41:31.235792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.546 [2024-07-15 11:41:31.235859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.546 [2024-07-15 11:41:31.235872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.546 [2024-07-15 11:41:31.235876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.546 [2024-07-15 11:41:31.235881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.546 [2024-07-15 11:41:31.235892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.546 qpair failed and we were unable to recover it. 00:30:02.808 [2024-07-15 11:41:31.245661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.808 [2024-07-15 11:41:31.245730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.808 [2024-07-15 11:41:31.245749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.808 [2024-07-15 11:41:31.245755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.808 [2024-07-15 11:41:31.245760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.808 [2024-07-15 11:41:31.245774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.808 qpair failed and we were unable to recover it. 
00:30:02.808 [2024-07-15 11:41:31.255792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.808 [2024-07-15 11:41:31.255860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.808 [2024-07-15 11:41:31.255878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.808 [2024-07-15 11:41:31.255884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.808 [2024-07-15 11:41:31.255889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.808 [2024-07-15 11:41:31.255903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.808 qpair failed and we were unable to recover it. 00:30:02.808 [2024-07-15 11:41:31.265832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.808 [2024-07-15 11:41:31.265892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.808 [2024-07-15 11:41:31.265911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.808 [2024-07-15 11:41:31.265917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.808 [2024-07-15 11:41:31.265922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.808 [2024-07-15 11:41:31.265936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.808 qpair failed and we were unable to recover it. 00:30:02.808 [2024-07-15 11:41:31.275924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.808 [2024-07-15 11:41:31.276017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.808 [2024-07-15 11:41:31.276035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.808 [2024-07-15 11:41:31.276041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.808 [2024-07-15 11:41:31.276046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.808 [2024-07-15 11:41:31.276060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.808 qpair failed and we were unable to recover it. 
00:30:02.808 [2024-07-15 11:41:31.285887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.808 [2024-07-15 11:41:31.285949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.808 [2024-07-15 11:41:31.285962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.808 [2024-07-15 11:41:31.285967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.808 [2024-07-15 11:41:31.285971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.808 [2024-07-15 11:41:31.285983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.808 qpair failed and we were unable to recover it. 00:30:02.808 [2024-07-15 11:41:31.295920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.808 [2024-07-15 11:41:31.295980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.808 [2024-07-15 11:41:31.295992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.808 [2024-07-15 11:41:31.295997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.808 [2024-07-15 11:41:31.296001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.808 [2024-07-15 11:41:31.296012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.808 qpair failed and we were unable to recover it. 00:30:02.808 [2024-07-15 11:41:31.305962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.808 [2024-07-15 11:41:31.306023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.808 [2024-07-15 11:41:31.306035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.808 [2024-07-15 11:41:31.306043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.808 [2024-07-15 11:41:31.306047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.808 [2024-07-15 11:41:31.306058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.808 qpair failed and we were unable to recover it. 
00:30:02.808 [2024-07-15 11:41:31.316009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.808 [2024-07-15 11:41:31.316076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.808 [2024-07-15 11:41:31.316088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.808 [2024-07-15 11:41:31.316093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.808 [2024-07-15 11:41:31.316097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.808 [2024-07-15 11:41:31.316109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.808 qpair failed and we were unable to recover it. 00:30:02.808 [2024-07-15 11:41:31.326054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.808 [2024-07-15 11:41:31.326167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.808 [2024-07-15 11:41:31.326181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.808 [2024-07-15 11:41:31.326187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.808 [2024-07-15 11:41:31.326192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.808 [2024-07-15 11:41:31.326203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.808 qpair failed and we were unable to recover it. 00:30:02.808 [2024-07-15 11:41:31.335962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.808 [2024-07-15 11:41:31.336022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.808 [2024-07-15 11:41:31.336034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.808 [2024-07-15 11:41:31.336039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.808 [2024-07-15 11:41:31.336043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.808 [2024-07-15 11:41:31.336054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.808 qpair failed and we were unable to recover it. 
00:30:02.808 [2024-07-15 11:41:31.346055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.808 [2024-07-15 11:41:31.346113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.808 [2024-07-15 11:41:31.346129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.808 [2024-07-15 11:41:31.346134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.808 [2024-07-15 11:41:31.346138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.808 [2024-07-15 11:41:31.346149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.808 qpair failed and we were unable to recover it. 00:30:02.808 [2024-07-15 11:41:31.356120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.809 [2024-07-15 11:41:31.356190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.809 [2024-07-15 11:41:31.356202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.809 [2024-07-15 11:41:31.356207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.809 [2024-07-15 11:41:31.356211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.809 [2024-07-15 11:41:31.356222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.809 qpair failed and we were unable to recover it. 00:30:02.809 [2024-07-15 11:41:31.366101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.809 [2024-07-15 11:41:31.366170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.809 [2024-07-15 11:41:31.366182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.809 [2024-07-15 11:41:31.366186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.809 [2024-07-15 11:41:31.366190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.809 [2024-07-15 11:41:31.366201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.809 qpair failed and we were unable to recover it. 
00:30:02.809 [2024-07-15 11:41:31.376117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.809 [2024-07-15 11:41:31.376181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.809 [2024-07-15 11:41:31.376193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.809 [2024-07-15 11:41:31.376198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.809 [2024-07-15 11:41:31.376202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.809 [2024-07-15 11:41:31.376213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.809 qpair failed and we were unable to recover it. 00:30:02.809 [2024-07-15 11:41:31.386148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.809 [2024-07-15 11:41:31.386221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.809 [2024-07-15 11:41:31.386233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.809 [2024-07-15 11:41:31.386238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.809 [2024-07-15 11:41:31.386242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.809 [2024-07-15 11:41:31.386253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.809 qpair failed and we were unable to recover it. 00:30:02.809 [2024-07-15 11:41:31.396233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.809 [2024-07-15 11:41:31.396301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.809 [2024-07-15 11:41:31.396315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.809 [2024-07-15 11:41:31.396320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.809 [2024-07-15 11:41:31.396324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.809 [2024-07-15 11:41:31.396335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.809 qpair failed and we were unable to recover it. 
00:30:02.809 [2024-07-15 11:41:31.406236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.809 [2024-07-15 11:41:31.406301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.809 [2024-07-15 11:41:31.406313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.809 [2024-07-15 11:41:31.406318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.809 [2024-07-15 11:41:31.406322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.809 [2024-07-15 11:41:31.406333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.809 qpair failed and we were unable to recover it. 00:30:02.809 [2024-07-15 11:41:31.416261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.809 [2024-07-15 11:41:31.416366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.809 [2024-07-15 11:41:31.416377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.809 [2024-07-15 11:41:31.416382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.809 [2024-07-15 11:41:31.416386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.809 [2024-07-15 11:41:31.416397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.809 qpair failed and we were unable to recover it. 00:30:02.809 [2024-07-15 11:41:31.426384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.809 [2024-07-15 11:41:31.426441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.809 [2024-07-15 11:41:31.426453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.809 [2024-07-15 11:41:31.426458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.809 [2024-07-15 11:41:31.426462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.809 [2024-07-15 11:41:31.426473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.809 qpair failed and we were unable to recover it. 
00:30:02.809 [2024-07-15 11:41:31.436224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.809 [2024-07-15 11:41:31.436288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.809 [2024-07-15 11:41:31.436300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.809 [2024-07-15 11:41:31.436305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.809 [2024-07-15 11:41:31.436310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.809 [2024-07-15 11:41:31.436324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.809 qpair failed and we were unable to recover it. 00:30:02.809 [2024-07-15 11:41:31.446320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.809 [2024-07-15 11:41:31.446384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.809 [2024-07-15 11:41:31.446396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.809 [2024-07-15 11:41:31.446401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.809 [2024-07-15 11:41:31.446406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.809 [2024-07-15 11:41:31.446417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.809 qpair failed and we were unable to recover it. 00:30:02.809 [2024-07-15 11:41:31.456343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.809 [2024-07-15 11:41:31.456402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.809 [2024-07-15 11:41:31.456413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.809 [2024-07-15 11:41:31.456418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.809 [2024-07-15 11:41:31.456422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.809 [2024-07-15 11:41:31.456433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.809 qpair failed and we were unable to recover it. 
00:30:02.809 [2024-07-15 11:41:31.466387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.809 [2024-07-15 11:41:31.466449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.809 [2024-07-15 11:41:31.466461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.809 [2024-07-15 11:41:31.466466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.809 [2024-07-15 11:41:31.466471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.809 [2024-07-15 11:41:31.466481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.809 qpair failed and we were unable to recover it. 00:30:02.809 [2024-07-15 11:41:31.476465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.809 [2024-07-15 11:41:31.476528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.809 [2024-07-15 11:41:31.476540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.809 [2024-07-15 11:41:31.476545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.809 [2024-07-15 11:41:31.476549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.809 [2024-07-15 11:41:31.476560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.809 qpair failed and we were unable to recover it. 00:30:02.809 [2024-07-15 11:41:31.486473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.809 [2024-07-15 11:41:31.486562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.809 [2024-07-15 11:41:31.486577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.809 [2024-07-15 11:41:31.486582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.809 [2024-07-15 11:41:31.486586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.809 [2024-07-15 11:41:31.486596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.809 qpair failed and we were unable to recover it. 
00:30:02.809 [2024-07-15 11:41:31.496456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.809 [2024-07-15 11:41:31.496564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.809 [2024-07-15 11:41:31.496575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.810 [2024-07-15 11:41:31.496580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.810 [2024-07-15 11:41:31.496584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.810 [2024-07-15 11:41:31.496594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.810 qpair failed and we were unable to recover it. 00:30:02.810 [2024-07-15 11:41:31.506507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.810 [2024-07-15 11:41:31.506657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.810 [2024-07-15 11:41:31.506669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.810 [2024-07-15 11:41:31.506673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.810 [2024-07-15 11:41:31.506678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:02.810 [2024-07-15 11:41:31.506688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.810 qpair failed and we were unable to recover it. 00:30:03.071 [2024-07-15 11:41:31.516568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.071 [2024-07-15 11:41:31.516640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.071 [2024-07-15 11:41:31.516652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.071 [2024-07-15 11:41:31.516657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.071 [2024-07-15 11:41:31.516661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.071 [2024-07-15 11:41:31.516672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.071 qpair failed and we were unable to recover it. 
00:30:03.071 [2024-07-15 11:41:31.526535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.071 [2024-07-15 11:41:31.526597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.071 [2024-07-15 11:41:31.526610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.071 [2024-07-15 11:41:31.526615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.071 [2024-07-15 11:41:31.526619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.071 [2024-07-15 11:41:31.526632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.071 qpair failed and we were unable to recover it. 00:30:03.071 [2024-07-15 11:41:31.536573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.071 [2024-07-15 11:41:31.536629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.071 [2024-07-15 11:41:31.536640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.071 [2024-07-15 11:41:31.536645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.071 [2024-07-15 11:41:31.536650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.071 [2024-07-15 11:41:31.536661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.071 qpair failed and we were unable to recover it. 00:30:03.071 [2024-07-15 11:41:31.546616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.071 [2024-07-15 11:41:31.546675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.071 [2024-07-15 11:41:31.546687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.071 [2024-07-15 11:41:31.546692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.072 [2024-07-15 11:41:31.546696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.072 [2024-07-15 11:41:31.546707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.072 qpair failed and we were unable to recover it. 
00:30:03.072 [2024-07-15 11:41:31.556672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.072 [2024-07-15 11:41:31.556744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.072 [2024-07-15 11:41:31.556762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.072 [2024-07-15 11:41:31.556768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.072 [2024-07-15 11:41:31.556772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.072 [2024-07-15 11:41:31.556787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.072 qpair failed and we were unable to recover it. 00:30:03.072 [2024-07-15 11:41:31.566662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.072 [2024-07-15 11:41:31.566733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.072 [2024-07-15 11:41:31.566752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.072 [2024-07-15 11:41:31.566758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.072 [2024-07-15 11:41:31.566763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.072 [2024-07-15 11:41:31.566777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.072 qpair failed and we were unable to recover it. 00:30:03.072 [2024-07-15 11:41:31.576707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.072 [2024-07-15 11:41:31.576794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.072 [2024-07-15 11:41:31.576813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.072 [2024-07-15 11:41:31.576819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.072 [2024-07-15 11:41:31.576823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.072 [2024-07-15 11:41:31.576838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.072 qpair failed and we were unable to recover it. 
00:30:03.072 [2024-07-15 11:41:31.586712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.072 [2024-07-15 11:41:31.586824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.072 [2024-07-15 11:41:31.586837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.072 [2024-07-15 11:41:31.586843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.072 [2024-07-15 11:41:31.586847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.072 [2024-07-15 11:41:31.586858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.072 qpair failed and we were unable to recover it. 00:30:03.072 [2024-07-15 11:41:31.596771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.072 [2024-07-15 11:41:31.596844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.072 [2024-07-15 11:41:31.596862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.072 [2024-07-15 11:41:31.596868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.072 [2024-07-15 11:41:31.596872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.072 [2024-07-15 11:41:31.596886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.072 qpair failed and we were unable to recover it. 00:30:03.072 [2024-07-15 11:41:31.606765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.072 [2024-07-15 11:41:31.606834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.072 [2024-07-15 11:41:31.606853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.072 [2024-07-15 11:41:31.606859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.072 [2024-07-15 11:41:31.606863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.072 [2024-07-15 11:41:31.606877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.072 qpair failed and we were unable to recover it. 
00:30:03.072 [2024-07-15 11:41:31.616783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.072 [2024-07-15 11:41:31.616842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.072 [2024-07-15 11:41:31.616854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.072 [2024-07-15 11:41:31.616859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.072 [2024-07-15 11:41:31.616867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.072 [2024-07-15 11:41:31.616878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.072 qpair failed and we were unable to recover it. 00:30:03.072 [2024-07-15 11:41:31.626825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.072 [2024-07-15 11:41:31.626924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.072 [2024-07-15 11:41:31.626936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.072 [2024-07-15 11:41:31.626941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.072 [2024-07-15 11:41:31.626945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.072 [2024-07-15 11:41:31.626956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.072 qpair failed and we were unable to recover it. 00:30:03.072 [2024-07-15 11:41:31.636856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.072 [2024-07-15 11:41:31.636926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.072 [2024-07-15 11:41:31.636937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.072 [2024-07-15 11:41:31.636942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.072 [2024-07-15 11:41:31.636947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.072 [2024-07-15 11:41:31.636958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.072 qpair failed and we were unable to recover it. 
00:30:03.072 [2024-07-15 11:41:31.646842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.072 [2024-07-15 11:41:31.646906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.072 [2024-07-15 11:41:31.646925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.072 [2024-07-15 11:41:31.646931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.072 [2024-07-15 11:41:31.646935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.072 [2024-07-15 11:41:31.646949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.072 qpair failed and we were unable to recover it. 00:30:03.072 [2024-07-15 11:41:31.656884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.072 [2024-07-15 11:41:31.656940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.072 [2024-07-15 11:41:31.656954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.072 [2024-07-15 11:41:31.656959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.072 [2024-07-15 11:41:31.656963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.072 [2024-07-15 11:41:31.656974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.072 qpair failed and we were unable to recover it. 00:30:03.072 [2024-07-15 11:41:31.666907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.072 [2024-07-15 11:41:31.666964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.072 [2024-07-15 11:41:31.666976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.072 [2024-07-15 11:41:31.666981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.072 [2024-07-15 11:41:31.666985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.072 [2024-07-15 11:41:31.666996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.072 qpair failed and we were unable to recover it. 
00:30:03.072 [2024-07-15 11:41:31.676988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.072 [2024-07-15 11:41:31.677078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.072 [2024-07-15 11:41:31.677097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.072 [2024-07-15 11:41:31.677103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.072 [2024-07-15 11:41:31.677108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.072 [2024-07-15 11:41:31.677134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.072 qpair failed and we were unable to recover it. 00:30:03.072 [2024-07-15 11:41:31.686849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.072 [2024-07-15 11:41:31.686917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.072 [2024-07-15 11:41:31.686930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.072 [2024-07-15 11:41:31.686936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.072 [2024-07-15 11:41:31.686940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.072 [2024-07-15 11:41:31.686951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.072 qpair failed and we were unable to recover it. 00:30:03.073 [2024-07-15 11:41:31.697011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.073 [2024-07-15 11:41:31.697073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.073 [2024-07-15 11:41:31.697085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.073 [2024-07-15 11:41:31.697090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.073 [2024-07-15 11:41:31.697094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.073 [2024-07-15 11:41:31.697105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.073 qpair failed and we were unable to recover it. 
00:30:03.073 [2024-07-15 11:41:31.707022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.073 [2024-07-15 11:41:31.707080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.073 [2024-07-15 11:41:31.707092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.073 [2024-07-15 11:41:31.707101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.073 [2024-07-15 11:41:31.707105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.073 [2024-07-15 11:41:31.707117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.073 qpair failed and we were unable to recover it. 00:30:03.073 [2024-07-15 11:41:31.717084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.073 [2024-07-15 11:41:31.717151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.073 [2024-07-15 11:41:31.717164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.073 [2024-07-15 11:41:31.717169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.073 [2024-07-15 11:41:31.717175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.073 [2024-07-15 11:41:31.717187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.073 qpair failed and we were unable to recover it. 00:30:03.073 [2024-07-15 11:41:31.727040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.073 [2024-07-15 11:41:31.727112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.073 [2024-07-15 11:41:31.727127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.073 [2024-07-15 11:41:31.727132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.073 [2024-07-15 11:41:31.727136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.073 [2024-07-15 11:41:31.727147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.073 qpair failed and we were unable to recover it. 
00:30:03.073 [2024-07-15 11:41:31.737112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.073 [2024-07-15 11:41:31.737175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.073 [2024-07-15 11:41:31.737187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.073 [2024-07-15 11:41:31.737192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.073 [2024-07-15 11:41:31.737196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.073 [2024-07-15 11:41:31.737206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.073 qpair failed and we were unable to recover it. 00:30:03.073 [2024-07-15 11:41:31.747127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.073 [2024-07-15 11:41:31.747186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.073 [2024-07-15 11:41:31.747198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.073 [2024-07-15 11:41:31.747203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.073 [2024-07-15 11:41:31.747207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.073 [2024-07-15 11:41:31.747217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.073 qpair failed and we were unable to recover it. 00:30:03.073 [2024-07-15 11:41:31.757242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.073 [2024-07-15 11:41:31.757351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.073 [2024-07-15 11:41:31.757363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.073 [2024-07-15 11:41:31.757368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.073 [2024-07-15 11:41:31.757372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.073 [2024-07-15 11:41:31.757382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.073 qpair failed and we were unable to recover it. 
00:30:03.073 [2024-07-15 11:41:31.767217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.073 [2024-07-15 11:41:31.767310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.073 [2024-07-15 11:41:31.767321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.073 [2024-07-15 11:41:31.767326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.073 [2024-07-15 11:41:31.767330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.073 [2024-07-15 11:41:31.767341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.073 qpair failed and we were unable to recover it. 00:30:03.347 [2024-07-15 11:41:31.777212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.347 [2024-07-15 11:41:31.777363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.347 [2024-07-15 11:41:31.777375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.347 [2024-07-15 11:41:31.777380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.347 [2024-07-15 11:41:31.777384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.347 [2024-07-15 11:41:31.777394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.347 qpair failed and we were unable to recover it. 00:30:03.347 [2024-07-15 11:41:31.787231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.347 [2024-07-15 11:41:31.787290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.347 [2024-07-15 11:41:31.787302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.347 [2024-07-15 11:41:31.787307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.347 [2024-07-15 11:41:31.787311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.347 [2024-07-15 11:41:31.787322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.347 qpair failed and we were unable to recover it. 
00:30:03.347 [2024-07-15 11:41:31.797318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.347 [2024-07-15 11:41:31.797382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.347 [2024-07-15 11:41:31.797397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.347 [2024-07-15 11:41:31.797402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.347 [2024-07-15 11:41:31.797406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.347 [2024-07-15 11:41:31.797417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.347 qpair failed and we were unable to recover it. 00:30:03.347 [2024-07-15 11:41:31.807265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.347 [2024-07-15 11:41:31.807331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.347 [2024-07-15 11:41:31.807343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.347 [2024-07-15 11:41:31.807348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.347 [2024-07-15 11:41:31.807352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.347 [2024-07-15 11:41:31.807363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.347 qpair failed and we were unable to recover it. 00:30:03.347 [2024-07-15 11:41:31.817322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.347 [2024-07-15 11:41:31.817422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.347 [2024-07-15 11:41:31.817433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.347 [2024-07-15 11:41:31.817438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.347 [2024-07-15 11:41:31.817443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.347 [2024-07-15 11:41:31.817453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.347 qpair failed and we were unable to recover it. 
00:30:03.347 [2024-07-15 11:41:31.827372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.347 [2024-07-15 11:41:31.827427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.347 [2024-07-15 11:41:31.827439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.347 [2024-07-15 11:41:31.827444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.347 [2024-07-15 11:41:31.827448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.347 [2024-07-15 11:41:31.827458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.347 qpair failed and we were unable to recover it. 00:30:03.347 [2024-07-15 11:41:31.837437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.347 [2024-07-15 11:41:31.837503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.347 [2024-07-15 11:41:31.837514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.347 [2024-07-15 11:41:31.837519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.347 [2024-07-15 11:41:31.837523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.347 [2024-07-15 11:41:31.837537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.347 qpair failed and we were unable to recover it. 00:30:03.347 [2024-07-15 11:41:31.847413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.347 [2024-07-15 11:41:31.847476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.347 [2024-07-15 11:41:31.847487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.347 [2024-07-15 11:41:31.847492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.347 [2024-07-15 11:41:31.847497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.348 [2024-07-15 11:41:31.847508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.348 qpair failed and we were unable to recover it. 
00:30:03.348 [2024-07-15 11:41:31.857399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.348 [2024-07-15 11:41:31.857456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.348 [2024-07-15 11:41:31.857468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.348 [2024-07-15 11:41:31.857473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.348 [2024-07-15 11:41:31.857477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.348 [2024-07-15 11:41:31.857487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.348 qpair failed and we were unable to recover it. 00:30:03.348 [2024-07-15 11:41:31.867439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.348 [2024-07-15 11:41:31.867495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.348 [2024-07-15 11:41:31.867506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.348 [2024-07-15 11:41:31.867511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.348 [2024-07-15 11:41:31.867515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.348 [2024-07-15 11:41:31.867526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.348 qpair failed and we were unable to recover it. 00:30:03.348 [2024-07-15 11:41:31.877581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.348 [2024-07-15 11:41:31.877671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.348 [2024-07-15 11:41:31.877683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.348 [2024-07-15 11:41:31.877688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.348 [2024-07-15 11:41:31.877692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.348 [2024-07-15 11:41:31.877703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.348 qpair failed and we were unable to recover it. 
00:30:03.348 [2024-07-15 11:41:31.887544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.348 [2024-07-15 11:41:31.887612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.348 [2024-07-15 11:41:31.887626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.348 [2024-07-15 11:41:31.887631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.348 [2024-07-15 11:41:31.887635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.348 [2024-07-15 11:41:31.887646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.348 qpair failed and we were unable to recover it. 00:30:03.348 [2024-07-15 11:41:31.897523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.348 [2024-07-15 11:41:31.897579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.348 [2024-07-15 11:41:31.897590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.348 [2024-07-15 11:41:31.897595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.348 [2024-07-15 11:41:31.897599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.348 [2024-07-15 11:41:31.897610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.348 qpair failed and we were unable to recover it. 00:30:03.348 [2024-07-15 11:41:31.907575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.348 [2024-07-15 11:41:31.907630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.348 [2024-07-15 11:41:31.907641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.348 [2024-07-15 11:41:31.907646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.348 [2024-07-15 11:41:31.907650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.348 [2024-07-15 11:41:31.907661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.348 qpair failed and we were unable to recover it. 
00:30:03.348 [2024-07-15 11:41:31.917651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.348 [2024-07-15 11:41:31.917717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.348 [2024-07-15 11:41:31.917729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.348 [2024-07-15 11:41:31.917734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.348 [2024-07-15 11:41:31.917738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.348 [2024-07-15 11:41:31.917749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.348 qpair failed and we were unable to recover it. 00:30:03.348 [2024-07-15 11:41:31.927640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.348 [2024-07-15 11:41:31.927706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.348 [2024-07-15 11:41:31.927717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.348 [2024-07-15 11:41:31.927722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.348 [2024-07-15 11:41:31.927727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.348 [2024-07-15 11:41:31.927740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.348 qpair failed and we were unable to recover it. 00:30:03.348 [2024-07-15 11:41:31.937627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.348 [2024-07-15 11:41:31.937729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.348 [2024-07-15 11:41:31.937740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.348 [2024-07-15 11:41:31.937745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.348 [2024-07-15 11:41:31.937750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.348 [2024-07-15 11:41:31.937760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.348 qpair failed and we were unable to recover it. 
00:30:03.348 [2024-07-15 11:41:31.947667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.348 [2024-07-15 11:41:31.947726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.348 [2024-07-15 11:41:31.947737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.348 [2024-07-15 11:41:31.947742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.348 [2024-07-15 11:41:31.947746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.348 [2024-07-15 11:41:31.947757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.348 qpair failed and we were unable to recover it. 00:30:03.348 [2024-07-15 11:41:31.957708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.348 [2024-07-15 11:41:31.957772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.348 [2024-07-15 11:41:31.957785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.348 [2024-07-15 11:41:31.957790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.348 [2024-07-15 11:41:31.957794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.348 [2024-07-15 11:41:31.957805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.348 qpair failed and we were unable to recover it. 00:30:03.348 [2024-07-15 11:41:31.967623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.348 [2024-07-15 11:41:31.967687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.348 [2024-07-15 11:41:31.967698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.348 [2024-07-15 11:41:31.967703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.348 [2024-07-15 11:41:31.967707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.348 [2024-07-15 11:41:31.967718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.348 qpair failed and we were unable to recover it. 
00:30:03.348 [2024-07-15 11:41:31.977727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.348 [2024-07-15 11:41:31.977791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.348 [2024-07-15 11:41:31.977806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.348 [2024-07-15 11:41:31.977811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.348 [2024-07-15 11:41:31.977815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.348 [2024-07-15 11:41:31.977826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.348 qpair failed and we were unable to recover it. 00:30:03.348 [2024-07-15 11:41:31.987645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.348 [2024-07-15 11:41:31.987710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.348 [2024-07-15 11:41:31.987722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.348 [2024-07-15 11:41:31.987727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.348 [2024-07-15 11:41:31.987731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.348 [2024-07-15 11:41:31.987741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.348 qpair failed and we were unable to recover it. 00:30:03.348 [2024-07-15 11:41:31.997826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.348 [2024-07-15 11:41:31.997890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.348 [2024-07-15 11:41:31.997903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.348 [2024-07-15 11:41:31.997909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.348 [2024-07-15 11:41:31.997913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.348 [2024-07-15 11:41:31.997924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.348 qpair failed and we were unable to recover it. 
00:30:03.348 [2024-07-15 11:41:32.007819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.348 [2024-07-15 11:41:32.007887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.348 [2024-07-15 11:41:32.007906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.348 [2024-07-15 11:41:32.007912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.348 [2024-07-15 11:41:32.007916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.348 [2024-07-15 11:41:32.007930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.348 qpair failed and we were unable to recover it. 00:30:03.348 [2024-07-15 11:41:32.017805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.348 [2024-07-15 11:41:32.017870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.348 [2024-07-15 11:41:32.017889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.348 [2024-07-15 11:41:32.017895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.348 [2024-07-15 11:41:32.017903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.348 [2024-07-15 11:41:32.017917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.348 qpair failed and we were unable to recover it. 00:30:03.348 [2024-07-15 11:41:32.027883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.348 [2024-07-15 11:41:32.027984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.348 [2024-07-15 11:41:32.028003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.348 [2024-07-15 11:41:32.028009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.348 [2024-07-15 11:41:32.028013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.348 [2024-07-15 11:41:32.028027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.348 qpair failed and we were unable to recover it. 
00:30:03.348 [2024-07-15 11:41:32.037958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.348 [2024-07-15 11:41:32.038073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.348 [2024-07-15 11:41:32.038086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.348 [2024-07-15 11:41:32.038091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.348 [2024-07-15 11:41:32.038095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.348 [2024-07-15 11:41:32.038106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.348 qpair failed and we were unable to recover it. 00:30:03.349 [2024-07-15 11:41:32.047932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.609 [2024-07-15 11:41:32.047994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.609 [2024-07-15 11:41:32.048007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.609 [2024-07-15 11:41:32.048011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.609 [2024-07-15 11:41:32.048016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.609 [2024-07-15 11:41:32.048028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.609 qpair failed and we were unable to recover it. 00:30:03.609 [2024-07-15 11:41:32.057963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.609 [2024-07-15 11:41:32.058021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.610 [2024-07-15 11:41:32.058033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.610 [2024-07-15 11:41:32.058038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.610 [2024-07-15 11:41:32.058042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.610 [2024-07-15 11:41:32.058053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.610 qpair failed and we were unable to recover it. 
00:30:03.610 [2024-07-15 11:41:32.067984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.610 [2024-07-15 11:41:32.068078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.610 [2024-07-15 11:41:32.068090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.610 [2024-07-15 11:41:32.068095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.610 [2024-07-15 11:41:32.068099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.610 [2024-07-15 11:41:32.068109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.610 qpair failed and we were unable to recover it. 00:30:03.610 [2024-07-15 11:41:32.078045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.610 [2024-07-15 11:41:32.078113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.610 [2024-07-15 11:41:32.078129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.610 [2024-07-15 11:41:32.078134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.610 [2024-07-15 11:41:32.078138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.610 [2024-07-15 11:41:32.078149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.610 qpair failed and we were unable to recover it. 00:30:03.610 [2024-07-15 11:41:32.088050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.610 [2024-07-15 11:41:32.088118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.610 [2024-07-15 11:41:32.088132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.610 [2024-07-15 11:41:32.088137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.610 [2024-07-15 11:41:32.088142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.610 [2024-07-15 11:41:32.088152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.610 qpair failed and we were unable to recover it. 
00:30:03.610 [2024-07-15 11:41:32.098072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.610 [2024-07-15 11:41:32.098132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.610 [2024-07-15 11:41:32.098144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.610 [2024-07-15 11:41:32.098148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.610 [2024-07-15 11:41:32.098153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.610 [2024-07-15 11:41:32.098163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.610 qpair failed and we were unable to recover it. 00:30:03.610 [2024-07-15 11:41:32.108005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.610 [2024-07-15 11:41:32.108063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.610 [2024-07-15 11:41:32.108075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.610 [2024-07-15 11:41:32.108084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.610 [2024-07-15 11:41:32.108088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.610 [2024-07-15 11:41:32.108099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.610 qpair failed and we were unable to recover it. 00:30:03.610 [2024-07-15 11:41:32.118175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.610 [2024-07-15 11:41:32.118238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.610 [2024-07-15 11:41:32.118249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.610 [2024-07-15 11:41:32.118254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.610 [2024-07-15 11:41:32.118258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.610 [2024-07-15 11:41:32.118269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.610 qpair failed and we were unable to recover it. 
00:30:03.610 [2024-07-15 11:41:32.128247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.610 [2024-07-15 11:41:32.128322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.610 [2024-07-15 11:41:32.128333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.610 [2024-07-15 11:41:32.128338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.610 [2024-07-15 11:41:32.128342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.610 [2024-07-15 11:41:32.128353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.610 qpair failed and we were unable to recover it. 00:30:03.610 [2024-07-15 11:41:32.138048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.610 [2024-07-15 11:41:32.138156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.610 [2024-07-15 11:41:32.138167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.610 [2024-07-15 11:41:32.138172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.610 [2024-07-15 11:41:32.138176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.610 [2024-07-15 11:41:32.138187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.610 qpair failed and we were unable to recover it. 00:30:03.610 [2024-07-15 11:41:32.148200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.610 [2024-07-15 11:41:32.148308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.610 [2024-07-15 11:41:32.148319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.610 [2024-07-15 11:41:32.148324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.610 [2024-07-15 11:41:32.148328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.611 [2024-07-15 11:41:32.148339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.611 qpair failed and we were unable to recover it. 
00:30:03.611 [2024-07-15 11:41:32.158259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.611 [2024-07-15 11:41:32.158322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.611 [2024-07-15 11:41:32.158333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.611 [2024-07-15 11:41:32.158338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.611 [2024-07-15 11:41:32.158342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.611 [2024-07-15 11:41:32.158353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.611 qpair failed and we were unable to recover it. 00:30:03.611 [2024-07-15 11:41:32.168245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.611 [2024-07-15 11:41:32.168315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.611 [2024-07-15 11:41:32.168328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.611 [2024-07-15 11:41:32.168333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.611 [2024-07-15 11:41:32.168340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.611 [2024-07-15 11:41:32.168352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.611 qpair failed and we were unable to recover it. 00:30:03.611 [2024-07-15 11:41:32.178293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.611 [2024-07-15 11:41:32.178348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.611 [2024-07-15 11:41:32.178361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.611 [2024-07-15 11:41:32.178366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.611 [2024-07-15 11:41:32.178370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.611 [2024-07-15 11:41:32.178381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.611 qpair failed and we were unable to recover it. 
00:30:03.611 [2024-07-15 11:41:32.188326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.611 [2024-07-15 11:41:32.188385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.611 [2024-07-15 11:41:32.188397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.611 [2024-07-15 11:41:32.188401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.611 [2024-07-15 11:41:32.188406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.611 [2024-07-15 11:41:32.188416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.611 qpair failed and we were unable to recover it. 00:30:03.611 [2024-07-15 11:41:32.198340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.611 [2024-07-15 11:41:32.198403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.611 [2024-07-15 11:41:32.198415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.611 [2024-07-15 11:41:32.198422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.611 [2024-07-15 11:41:32.198427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.611 [2024-07-15 11:41:32.198437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.611 qpair failed and we were unable to recover it. 00:30:03.611 [2024-07-15 11:41:32.208428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.611 [2024-07-15 11:41:32.208495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.611 [2024-07-15 11:41:32.208507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.611 [2024-07-15 11:41:32.208512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.611 [2024-07-15 11:41:32.208516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.611 [2024-07-15 11:41:32.208526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.611 qpair failed and we were unable to recover it. 
00:30:03.611 [2024-07-15 11:41:32.218386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.611 [2024-07-15 11:41:32.218448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.611 [2024-07-15 11:41:32.218459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.611 [2024-07-15 11:41:32.218464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.611 [2024-07-15 11:41:32.218469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.611 [2024-07-15 11:41:32.218479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.611 qpair failed and we were unable to recover it. 00:30:03.611 [2024-07-15 11:41:32.228413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.611 [2024-07-15 11:41:32.228472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.611 [2024-07-15 11:41:32.228484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.611 [2024-07-15 11:41:32.228489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.611 [2024-07-15 11:41:32.228493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.611 [2024-07-15 11:41:32.228504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.611 qpair failed and we were unable to recover it. 00:30:03.611 [2024-07-15 11:41:32.238471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.611 [2024-07-15 11:41:32.238539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.611 [2024-07-15 11:41:32.238552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.611 [2024-07-15 11:41:32.238557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.611 [2024-07-15 11:41:32.238563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.611 [2024-07-15 11:41:32.238575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.611 qpair failed and we were unable to recover it. 
00:30:03.611 [2024-07-15 11:41:32.248461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.611 [2024-07-15 11:41:32.248522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.611 [2024-07-15 11:41:32.248535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.611 [2024-07-15 11:41:32.248539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.611 [2024-07-15 11:41:32.248544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.612 [2024-07-15 11:41:32.248554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.612 qpair failed and we were unable to recover it. 00:30:03.612 [2024-07-15 11:41:32.258476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.612 [2024-07-15 11:41:32.258531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.612 [2024-07-15 11:41:32.258542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.612 [2024-07-15 11:41:32.258547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.612 [2024-07-15 11:41:32.258551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.612 [2024-07-15 11:41:32.258562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.612 qpair failed and we were unable to recover it. 00:30:03.612 [2024-07-15 11:41:32.268523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.612 [2024-07-15 11:41:32.268577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.612 [2024-07-15 11:41:32.268588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.612 [2024-07-15 11:41:32.268593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.612 [2024-07-15 11:41:32.268598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.612 [2024-07-15 11:41:32.268608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.612 qpair failed and we were unable to recover it. 
00:30:03.612 [2024-07-15 11:41:32.278600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.612 [2024-07-15 11:41:32.278661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.612 [2024-07-15 11:41:32.278672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.612 [2024-07-15 11:41:32.278677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.612 [2024-07-15 11:41:32.278681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.612 [2024-07-15 11:41:32.278692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.612 qpair failed and we were unable to recover it. 00:30:03.612 [2024-07-15 11:41:32.288573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.612 [2024-07-15 11:41:32.288642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.612 [2024-07-15 11:41:32.288656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.612 [2024-07-15 11:41:32.288661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.612 [2024-07-15 11:41:32.288665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.612 [2024-07-15 11:41:32.288676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.612 qpair failed and we were unable to recover it. 00:30:03.612 [2024-07-15 11:41:32.298469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.612 [2024-07-15 11:41:32.298529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.612 [2024-07-15 11:41:32.298541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.612 [2024-07-15 11:41:32.298546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.612 [2024-07-15 11:41:32.298550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.612 [2024-07-15 11:41:32.298560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.612 qpair failed and we were unable to recover it. 
00:30:03.612 [2024-07-15 11:41:32.308611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.612 [2024-07-15 11:41:32.308667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.612 [2024-07-15 11:41:32.308679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.612 [2024-07-15 11:41:32.308684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.612 [2024-07-15 11:41:32.308688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.612 [2024-07-15 11:41:32.308699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.612 qpair failed and we were unable to recover it. 00:30:03.873 [2024-07-15 11:41:32.318673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.873 [2024-07-15 11:41:32.318740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.873 [2024-07-15 11:41:32.318752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.873 [2024-07-15 11:41:32.318757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.873 [2024-07-15 11:41:32.318761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.873 [2024-07-15 11:41:32.318772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-07-15 11:41:32.328701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.873 [2024-07-15 11:41:32.328805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.873 [2024-07-15 11:41:32.328816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.873 [2024-07-15 11:41:32.328821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.873 [2024-07-15 11:41:32.328825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.873 [2024-07-15 11:41:32.328839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.873 qpair failed and we were unable to recover it. 
00:30:03.873 [2024-07-15 11:41:32.338711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.873 [2024-07-15 11:41:32.338771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.873 [2024-07-15 11:41:32.338783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.873 [2024-07-15 11:41:32.338788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.873 [2024-07-15 11:41:32.338792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.873 [2024-07-15 11:41:32.338802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-07-15 11:41:32.348745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.873 [2024-07-15 11:41:32.348802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.873 [2024-07-15 11:41:32.348813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.873 [2024-07-15 11:41:32.348818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.873 [2024-07-15 11:41:32.348822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.873 [2024-07-15 11:41:32.348833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-07-15 11:41:32.358754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.873 [2024-07-15 11:41:32.358818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.873 [2024-07-15 11:41:32.358830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.873 [2024-07-15 11:41:32.358835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.873 [2024-07-15 11:41:32.358839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.873 [2024-07-15 11:41:32.358849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.873 qpair failed and we were unable to recover it. 
00:30:03.873 [2024-07-15 11:41:32.368739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.873 [2024-07-15 11:41:32.368806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.873 [2024-07-15 11:41:32.368818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.873 [2024-07-15 11:41:32.368822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.873 [2024-07-15 11:41:32.368827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.873 [2024-07-15 11:41:32.368837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-07-15 11:41:32.378805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.873 [2024-07-15 11:41:32.378865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.873 [2024-07-15 11:41:32.378883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.873 [2024-07-15 11:41:32.378888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.873 [2024-07-15 11:41:32.378892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.873 [2024-07-15 11:41:32.378903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.873 qpair failed and we were unable to recover it. 00:30:03.873 [2024-07-15 11:41:32.388834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.874 [2024-07-15 11:41:32.388894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.874 [2024-07-15 11:41:32.388912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.874 [2024-07-15 11:41:32.388918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.874 [2024-07-15 11:41:32.388922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.874 [2024-07-15 11:41:32.388937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.874 qpair failed and we were unable to recover it. 
00:30:03.874 [2024-07-15 11:41:32.398950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.874 [2024-07-15 11:41:32.399016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.874 [2024-07-15 11:41:32.399029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.874 [2024-07-15 11:41:32.399034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.874 [2024-07-15 11:41:32.399038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.874 [2024-07-15 11:41:32.399049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-07-15 11:41:32.408882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.874 [2024-07-15 11:41:32.408954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.874 [2024-07-15 11:41:32.408966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.874 [2024-07-15 11:41:32.408970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.874 [2024-07-15 11:41:32.408975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.874 [2024-07-15 11:41:32.408985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-07-15 11:41:32.418963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.874 [2024-07-15 11:41:32.419024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.874 [2024-07-15 11:41:32.419036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.874 [2024-07-15 11:41:32.419040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.874 [2024-07-15 11:41:32.419048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.874 [2024-07-15 11:41:32.419059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.874 qpair failed and we were unable to recover it. 
00:30:03.874 [2024-07-15 11:41:32.428963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.874 [2024-07-15 11:41:32.429022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.874 [2024-07-15 11:41:32.429034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.874 [2024-07-15 11:41:32.429039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.874 [2024-07-15 11:41:32.429043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.874 [2024-07-15 11:41:32.429053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-07-15 11:41:32.439064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.874 [2024-07-15 11:41:32.439166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.874 [2024-07-15 11:41:32.439178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.874 [2024-07-15 11:41:32.439183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.874 [2024-07-15 11:41:32.439187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.874 [2024-07-15 11:41:32.439198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-07-15 11:41:32.449046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.874 [2024-07-15 11:41:32.449112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.874 [2024-07-15 11:41:32.449126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.874 [2024-07-15 11:41:32.449131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.874 [2024-07-15 11:41:32.449135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.874 [2024-07-15 11:41:32.449146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.874 qpair failed and we were unable to recover it. 
00:30:03.874 [2024-07-15 11:41:32.459081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.874 [2024-07-15 11:41:32.459157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.874 [2024-07-15 11:41:32.459169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.874 [2024-07-15 11:41:32.459174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.874 [2024-07-15 11:41:32.459178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.874 [2024-07-15 11:41:32.459188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-07-15 11:41:32.469050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.874 [2024-07-15 11:41:32.469111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.874 [2024-07-15 11:41:32.469125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.874 [2024-07-15 11:41:32.469130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.874 [2024-07-15 11:41:32.469134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.874 [2024-07-15 11:41:32.469145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-07-15 11:41:32.479128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.874 [2024-07-15 11:41:32.479210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.874 [2024-07-15 11:41:32.479221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.874 [2024-07-15 11:41:32.479226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.874 [2024-07-15 11:41:32.479231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.874 [2024-07-15 11:41:32.479242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.874 qpair failed and we were unable to recover it. 
00:30:03.874 [2024-07-15 11:41:32.489110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.874 [2024-07-15 11:41:32.489178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.874 [2024-07-15 11:41:32.489190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.874 [2024-07-15 11:41:32.489195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.874 [2024-07-15 11:41:32.489199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.874 [2024-07-15 11:41:32.489210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-07-15 11:41:32.499133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.874 [2024-07-15 11:41:32.499203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.874 [2024-07-15 11:41:32.499214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.874 [2024-07-15 11:41:32.499219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.874 [2024-07-15 11:41:32.499225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.874 [2024-07-15 11:41:32.499236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-07-15 11:41:32.509216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.874 [2024-07-15 11:41:32.509328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.874 [2024-07-15 11:41:32.509340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.874 [2024-07-15 11:41:32.509348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.874 [2024-07-15 11:41:32.509353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.874 [2024-07-15 11:41:32.509363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.874 qpair failed and we were unable to recover it. 
00:30:03.874 [2024-07-15 11:41:32.519236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.874 [2024-07-15 11:41:32.519303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.874 [2024-07-15 11:41:32.519315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.874 [2024-07-15 11:41:32.519320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.874 [2024-07-15 11:41:32.519324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.874 [2024-07-15 11:41:32.519334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.874 qpair failed and we were unable to recover it. 00:30:03.874 [2024-07-15 11:41:32.529204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.874 [2024-07-15 11:41:32.529277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.875 [2024-07-15 11:41:32.529289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.875 [2024-07-15 11:41:32.529294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.875 [2024-07-15 11:41:32.529298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.875 [2024-07-15 11:41:32.529309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.875 qpair failed and we were unable to recover it. 00:30:03.875 [2024-07-15 11:41:32.539266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.875 [2024-07-15 11:41:32.539358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.875 [2024-07-15 11:41:32.539370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.875 [2024-07-15 11:41:32.539375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.875 [2024-07-15 11:41:32.539379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.875 [2024-07-15 11:41:32.539390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.875 qpair failed and we were unable to recover it. 
00:30:03.875 [2024-07-15 11:41:32.549397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.875 [2024-07-15 11:41:32.549455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.875 [2024-07-15 11:41:32.549467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.875 [2024-07-15 11:41:32.549472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.875 [2024-07-15 11:41:32.549476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.875 [2024-07-15 11:41:32.549487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.875 qpair failed and we were unable to recover it. 00:30:03.875 [2024-07-15 11:41:32.559378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.875 [2024-07-15 11:41:32.559441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.875 [2024-07-15 11:41:32.559452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.875 [2024-07-15 11:41:32.559457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.875 [2024-07-15 11:41:32.559461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.875 [2024-07-15 11:41:32.559472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.875 qpair failed and we were unable to recover it. 00:30:03.875 [2024-07-15 11:41:32.569349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.875 [2024-07-15 11:41:32.569415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.875 [2024-07-15 11:41:32.569427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.875 [2024-07-15 11:41:32.569432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.875 [2024-07-15 11:41:32.569436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:03.875 [2024-07-15 11:41:32.569446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.875 qpair failed and we were unable to recover it. 
00:30:04.136 [2024-07-15 11:41:32.579354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.136 [2024-07-15 11:41:32.579416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.136 [2024-07-15 11:41:32.579428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.136 [2024-07-15 11:41:32.579433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.136 [2024-07-15 11:41:32.579438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.136 [2024-07-15 11:41:32.579449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.136 qpair failed and we were unable to recover it. 00:30:04.136 [2024-07-15 11:41:32.589401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.136 [2024-07-15 11:41:32.589463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.136 [2024-07-15 11:41:32.589475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.136 [2024-07-15 11:41:32.589480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.136 [2024-07-15 11:41:32.589484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.136 [2024-07-15 11:41:32.589495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.136 qpair failed and we were unable to recover it. 00:30:04.136 [2024-07-15 11:41:32.599453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.136 [2024-07-15 11:41:32.599519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.136 [2024-07-15 11:41:32.599531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.136 [2024-07-15 11:41:32.599538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.136 [2024-07-15 11:41:32.599543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.136 [2024-07-15 11:41:32.599553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.136 qpair failed and we were unable to recover it. 
00:30:04.136 [2024-07-15 11:41:32.609466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.136 [2024-07-15 11:41:32.609534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.136 [2024-07-15 11:41:32.609546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.136 [2024-07-15 11:41:32.609551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.136 [2024-07-15 11:41:32.609555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.136 [2024-07-15 11:41:32.609566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.136 qpair failed and we were unable to recover it. 00:30:04.136 [2024-07-15 11:41:32.619394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.136 [2024-07-15 11:41:32.619451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.136 [2024-07-15 11:41:32.619463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.136 [2024-07-15 11:41:32.619468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.136 [2024-07-15 11:41:32.619473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.136 [2024-07-15 11:41:32.619484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.136 qpair failed and we were unable to recover it. 00:30:04.136 [2024-07-15 11:41:32.629480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.136 [2024-07-15 11:41:32.629541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.136 [2024-07-15 11:41:32.629552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.136 [2024-07-15 11:41:32.629557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.136 [2024-07-15 11:41:32.629561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.136 [2024-07-15 11:41:32.629572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.136 qpair failed and we were unable to recover it. 
00:30:04.136 [2024-07-15 11:41:32.639587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.136 [2024-07-15 11:41:32.639669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.136 [2024-07-15 11:41:32.639681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.136 [2024-07-15 11:41:32.639686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.136 [2024-07-15 11:41:32.639691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.136 [2024-07-15 11:41:32.639702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.136 qpair failed and we were unable to recover it. 00:30:04.136 [2024-07-15 11:41:32.649555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.136 [2024-07-15 11:41:32.649616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.136 [2024-07-15 11:41:32.649627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.136 [2024-07-15 11:41:32.649632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.136 [2024-07-15 11:41:32.649636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.136 [2024-07-15 11:41:32.649647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.136 qpair failed and we were unable to recover it. 00:30:04.136 [2024-07-15 11:41:32.659593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.136 [2024-07-15 11:41:32.659652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.136 [2024-07-15 11:41:32.659664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.136 [2024-07-15 11:41:32.659670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.136 [2024-07-15 11:41:32.659675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.136 [2024-07-15 11:41:32.659686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.136 qpair failed and we were unable to recover it. 
00:30:04.136 [2024-07-15 11:41:32.669605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.136 [2024-07-15 11:41:32.669666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.136 [2024-07-15 11:41:32.669678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.136 [2024-07-15 11:41:32.669683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.136 [2024-07-15 11:41:32.669687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.136 [2024-07-15 11:41:32.669698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.136 qpair failed and we were unable to recover it. 00:30:04.137 [2024-07-15 11:41:32.679688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.137 [2024-07-15 11:41:32.679753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.137 [2024-07-15 11:41:32.679765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.137 [2024-07-15 11:41:32.679770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.137 [2024-07-15 11:41:32.679774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.137 [2024-07-15 11:41:32.679784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.137 qpair failed and we were unable to recover it. 00:30:04.137 [2024-07-15 11:41:32.689730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.137 [2024-07-15 11:41:32.689798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.137 [2024-07-15 11:41:32.689813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.137 [2024-07-15 11:41:32.689818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.137 [2024-07-15 11:41:32.689823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.137 [2024-07-15 11:41:32.689834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.137 qpair failed and we were unable to recover it. 
00:30:04.137 [2024-07-15 11:41:32.699695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.137 [2024-07-15 11:41:32.699757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.137 [2024-07-15 11:41:32.699776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.137 [2024-07-15 11:41:32.699782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.137 [2024-07-15 11:41:32.699787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.137 [2024-07-15 11:41:32.699800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.137 qpair failed and we were unable to recover it. 00:30:04.137 [2024-07-15 11:41:32.709759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.137 [2024-07-15 11:41:32.709818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.137 [2024-07-15 11:41:32.709831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.137 [2024-07-15 11:41:32.709836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.137 [2024-07-15 11:41:32.709841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.137 [2024-07-15 11:41:32.709852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.137 qpair failed and we were unable to recover it. 00:30:04.137 [2024-07-15 11:41:32.719816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.137 [2024-07-15 11:41:32.719890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.137 [2024-07-15 11:41:32.719902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.137 [2024-07-15 11:41:32.719907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.137 [2024-07-15 11:41:32.719911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.137 [2024-07-15 11:41:32.719922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.137 qpair failed and we were unable to recover it. 
00:30:04.137 [2024-07-15 11:41:32.729781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.137 [2024-07-15 11:41:32.729844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.137 [2024-07-15 11:41:32.729856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.137 [2024-07-15 11:41:32.729861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.137 [2024-07-15 11:41:32.729865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.137 [2024-07-15 11:41:32.729879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.137 qpair failed and we were unable to recover it. 00:30:04.137 [2024-07-15 11:41:32.739812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.137 [2024-07-15 11:41:32.739872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.137 [2024-07-15 11:41:32.739884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.137 [2024-07-15 11:41:32.739889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.137 [2024-07-15 11:41:32.739893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.137 [2024-07-15 11:41:32.739904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.137 qpair failed and we were unable to recover it. 00:30:04.137 [2024-07-15 11:41:32.749831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.137 [2024-07-15 11:41:32.749891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.137 [2024-07-15 11:41:32.749903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.137 [2024-07-15 11:41:32.749908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.137 [2024-07-15 11:41:32.749912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.137 [2024-07-15 11:41:32.749923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.137 qpair failed and we were unable to recover it. 
00:30:04.137 [2024-07-15 11:41:32.759908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.137 [2024-07-15 11:41:32.759971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.137 [2024-07-15 11:41:32.759983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.137 [2024-07-15 11:41:32.759988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.137 [2024-07-15 11:41:32.759992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.137 [2024-07-15 11:41:32.760003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.137 qpair failed and we were unable to recover it. 00:30:04.137 [2024-07-15 11:41:32.769927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.137 [2024-07-15 11:41:32.769997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.137 [2024-07-15 11:41:32.770008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.137 [2024-07-15 11:41:32.770013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.137 [2024-07-15 11:41:32.770017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.137 [2024-07-15 11:41:32.770028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.137 qpair failed and we were unable to recover it. 00:30:04.137 [2024-07-15 11:41:32.779915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.137 [2024-07-15 11:41:32.779974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.137 [2024-07-15 11:41:32.779989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.137 [2024-07-15 11:41:32.779994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.137 [2024-07-15 11:41:32.779998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.137 [2024-07-15 11:41:32.780008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.137 qpair failed and we were unable to recover it. 
00:30:04.137 [2024-07-15 11:41:32.789925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.137 [2024-07-15 11:41:32.789984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.137 [2024-07-15 11:41:32.789995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.137 [2024-07-15 11:41:32.790000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.137 [2024-07-15 11:41:32.790004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.137 [2024-07-15 11:41:32.790015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.137 qpair failed and we were unable to recover it. 00:30:04.137 [2024-07-15 11:41:32.800008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.137 [2024-07-15 11:41:32.800073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.137 [2024-07-15 11:41:32.800085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.137 [2024-07-15 11:41:32.800090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.137 [2024-07-15 11:41:32.800094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.137 [2024-07-15 11:41:32.800104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.137 qpair failed and we were unable to recover it. 00:30:04.137 [2024-07-15 11:41:32.809946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.137 [2024-07-15 11:41:32.810020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.137 [2024-07-15 11:41:32.810032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.137 [2024-07-15 11:41:32.810037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.137 [2024-07-15 11:41:32.810041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.137 [2024-07-15 11:41:32.810051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.137 qpair failed and we were unable to recover it. 
00:30:04.137 [2024-07-15 11:41:32.820002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.138 [2024-07-15 11:41:32.820070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.138 [2024-07-15 11:41:32.820082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.138 [2024-07-15 11:41:32.820086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.138 [2024-07-15 11:41:32.820093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.138 [2024-07-15 11:41:32.820104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.138 qpair failed and we were unable to recover it. 00:30:04.138 [2024-07-15 11:41:32.830039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.138 [2024-07-15 11:41:32.830097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.138 [2024-07-15 11:41:32.830109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.138 [2024-07-15 11:41:32.830114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.138 [2024-07-15 11:41:32.830118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.138 [2024-07-15 11:41:32.830132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.138 qpair failed and we were unable to recover it. 00:30:04.399 [2024-07-15 11:41:32.840110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.399 [2024-07-15 11:41:32.840180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.399 [2024-07-15 11:41:32.840192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.399 [2024-07-15 11:41:32.840197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.399 [2024-07-15 11:41:32.840201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.399 [2024-07-15 11:41:32.840212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.399 qpair failed and we were unable to recover it. 
00:30:04.399 [2024-07-15 11:41:32.850105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.399 [2024-07-15 11:41:32.850170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.399 [2024-07-15 11:41:32.850182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.399 [2024-07-15 11:41:32.850187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.399 [2024-07-15 11:41:32.850191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.399 [2024-07-15 11:41:32.850202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.399 qpair failed and we were unable to recover it. 00:30:04.399 [2024-07-15 11:41:32.860192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.399 [2024-07-15 11:41:32.860262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.399 [2024-07-15 11:41:32.860274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.399 [2024-07-15 11:41:32.860279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.399 [2024-07-15 11:41:32.860283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.399 [2024-07-15 11:41:32.860294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.399 qpair failed and we were unable to recover it. 00:30:04.399 [2024-07-15 11:41:32.870149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.399 [2024-07-15 11:41:32.870213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.399 [2024-07-15 11:41:32.870225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.399 [2024-07-15 11:41:32.870230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.399 [2024-07-15 11:41:32.870234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.399 [2024-07-15 11:41:32.870245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.399 qpair failed and we were unable to recover it. 
00:30:04.399 [2024-07-15 11:41:32.880180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.399 [2024-07-15 11:41:32.880246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.399 [2024-07-15 11:41:32.880258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.399 [2024-07-15 11:41:32.880263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.399 [2024-07-15 11:41:32.880267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.399 [2024-07-15 11:41:32.880278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.399 qpair failed and we were unable to recover it. 00:30:04.399 [2024-07-15 11:41:32.890200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.399 [2024-07-15 11:41:32.890262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.399 [2024-07-15 11:41:32.890274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.399 [2024-07-15 11:41:32.890279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.399 [2024-07-15 11:41:32.890283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.400 [2024-07-15 11:41:32.890294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.400 qpair failed and we were unable to recover it. 00:30:04.400 [2024-07-15 11:41:32.900258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.400 [2024-07-15 11:41:32.900317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.400 [2024-07-15 11:41:32.900328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.400 [2024-07-15 11:41:32.900333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.400 [2024-07-15 11:41:32.900338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb0000b90 00:30:04.400 [2024-07-15 11:41:32.900348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:04.400 qpair failed and we were unable to recover it. 
00:30:04.400 [2024-07-15 11:41:32.910382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.400 [2024-07-15 11:41:32.910560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.400 [2024-07-15 11:41:32.910623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.400 [2024-07-15 11:41:32.910648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.400 [2024-07-15 11:41:32.910680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fa8000b90 00:30:04.400 [2024-07-15 11:41:32.910733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:04.400 qpair failed and we were unable to recover it. 00:30:04.400 [2024-07-15 11:41:32.920372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.400 [2024-07-15 11:41:32.920528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.400 [2024-07-15 11:41:32.920579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.400 [2024-07-15 11:41:32.920599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.400 [2024-07-15 11:41:32.920616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fa8000b90 00:30:04.400 [2024-07-15 11:41:32.920658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:04.400 qpair failed and we were unable to recover it. 
00:30:04.400 Read completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Read completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Read completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Read completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Read completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Read completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Read completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Read completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Read completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Read completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Read completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Read completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Write completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Read completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Write completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Read completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Read completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Write completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Write completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Read completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Write completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Read completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Write completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Read completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Write completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Write completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Read completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Read completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Read completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Write completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Write completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 Read completed with error (sct=0, sc=8) 00:30:04.400 starting I/O failed 00:30:04.400 [2024-07-15 11:41:32.921533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:04.400 [2024-07-15 11:41:32.930412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.400 [2024-07-15 11:41:32.930595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.400 [2024-07-15 11:41:32.930644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.400 [2024-07-15 11:41:32.930666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric 
CONNECT command 00:30:04.400 [2024-07-15 11:41:32.930685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb8000b90 00:30:04.400 [2024-07-15 11:41:32.930744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:04.400 qpair failed and we were unable to recover it. 00:30:04.400 [2024-07-15 11:41:32.940446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.400 [2024-07-15 11:41:32.940594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.400 [2024-07-15 11:41:32.940646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.400 [2024-07-15 11:41:32.940665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.400 [2024-07-15 11:41:32.940681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7fb8000b90 00:30:04.400 [2024-07-15 11:41:32.940724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:04.400 qpair failed and we were unable to recover it. 00:30:04.400 [2024-07-15 11:41:32.950365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.400 [2024-07-15 11:41:32.950444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.400 [2024-07-15 11:41:32.950470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.400 [2024-07-15 11:41:32.950478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.400 [2024-07-15 11:41:32.950486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6bf220 00:30:04.400 [2024-07-15 11:41:32.950505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.400 qpair failed and we were unable to recover it. 00:30:04.400 [2024-07-15 11:41:32.960430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:04.400 [2024-07-15 11:41:32.960512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:04.400 [2024-07-15 11:41:32.960529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:04.400 [2024-07-15 11:41:32.960537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:04.400 [2024-07-15 11:41:32.960543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6bf220 00:30:04.400 [2024-07-15 11:41:32.960558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:04.400 qpair failed and we were unable to recover it. 
00:30:04.400 [2024-07-15 11:41:32.960718] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:30:04.400 A controller has encountered a failure and is being reset. 00:30:04.400 [2024-07-15 11:41:32.960836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ccf20 (9): Bad file descriptor 00:30:04.400 Controller properly reset. 00:30:04.400 Initializing NVMe Controllers 00:30:04.400 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:04.400 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:04.400 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:04.400 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:04.400 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:04.400 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:04.400 Initialization complete. Launching workers. 00:30:04.400 Starting thread on core 1 00:30:04.400 Starting thread on core 2 00:30:04.400 Starting thread on core 3 00:30:04.400 Starting thread on core 0 00:30:04.400 11:41:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:04.400 00:30:04.400 real 0m11.531s 00:30:04.400 user 0m20.252s 00:30:04.400 sys 0m4.088s 00:30:04.400 11:41:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:04.400 11:41:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:04.400 ************************************ 00:30:04.400 END TEST nvmf_target_disconnect_tc2 00:30:04.400 ************************************ 00:30:04.663 11:41:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:30:04.663 11:41:33 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:04.663 11:41:33 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:04.663 11:41:33 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:04.663 11:41:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:04.663 11:41:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:30:04.663 11:41:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:04.663 11:41:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:30:04.663 11:41:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:04.663 11:41:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:04.663 rmmod nvme_tcp 00:30:04.663 rmmod nvme_fabrics 00:30:04.663 rmmod nvme_keyring 00:30:04.663 11:41:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:04.663 11:41:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:30:04.663 11:41:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:30:04.663 11:41:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3739380 ']' 00:30:04.663 11:41:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3739380 00:30:04.663 11:41:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 
3739380 ']' 00:30:04.663 11:41:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 3739380 00:30:04.663 11:41:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:30:04.663 11:41:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:04.663 11:41:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3739380 00:30:04.663 11:41:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:30:04.663 11:41:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:30:04.663 11:41:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3739380' 00:30:04.663 killing process with pid 3739380 00:30:04.663 11:41:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 3739380 00:30:04.663 11:41:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 3739380 00:30:04.923 11:41:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:04.923 11:41:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:04.923 11:41:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:04.923 11:41:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:04.923 11:41:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:04.923 11:41:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.923 11:41:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:04.923 11:41:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.858 11:41:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:06.858 00:30:06.858 real 0m21.393s 00:30:06.858 user 0m48.640s 00:30:06.858 sys 0m9.799s 00:30:06.858 11:41:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:06.858 11:41:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:06.858 ************************************ 00:30:06.858 END TEST nvmf_target_disconnect 00:30:06.858 ************************************ 00:30:06.858 11:41:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:06.858 11:41:35 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:30:06.858 11:41:35 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:06.858 11:41:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:06.858 11:41:35 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:30:06.858 00:30:06.858 real 22m36.656s 00:30:06.858 user 47m14.422s 00:30:06.858 sys 7m6.066s 00:30:06.858 11:41:35 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:06.858 11:41:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:06.858 ************************************ 00:30:06.858 END TEST nvmf_tcp 00:30:06.858 ************************************ 00:30:07.121 11:41:35 -- common/autotest_common.sh@1142 -- # return 0 00:30:07.121 11:41:35 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:30:07.121 11:41:35 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:07.121 11:41:35 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:07.121 11:41:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:07.121 11:41:35 -- common/autotest_common.sh@10 -- # set +x 00:30:07.121 ************************************ 00:30:07.121 START TEST spdkcli_nvmf_tcp 00:30:07.121 ************************************ 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:07.121 * Looking for test storage... 00:30:07.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:07.121 11:41:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:07.122 11:41:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:07.122 11:41:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:07.122 11:41:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:07.122 11:41:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3741209 00:30:07.122 11:41:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3741209 00:30:07.122 11:41:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 3741209 ']' 00:30:07.122 11:41:35 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:07.122 11:41:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:07.122 11:41:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:07.122 11:41:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:07.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:07.122 11:41:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:07.122 11:41:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:07.122 [2024-07-15 11:41:35.800145] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:30:07.122 [2024-07-15 11:41:35.800213] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3741209 ] 00:30:07.382 EAL: No free 2048 kB hugepages reported on node 1 00:30:07.382 [2024-07-15 11:41:35.865924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:07.382 [2024-07-15 11:41:35.940700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:07.382 [2024-07-15 11:41:35.940703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.953 11:41:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:07.953 11:41:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:30:07.953 11:41:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:07.953 11:41:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:07.953 11:41:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:07.953 11:41:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:07.953 11:41:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:07.953 11:41:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:07.953 11:41:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:07.953 11:41:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:07.953 11:41:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:07.953 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:07.953 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:07.953 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:07.953 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:07.953 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:07.953 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:07.953 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:07.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:07.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 
00:30:07.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:07.953 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:07.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:07.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:07.953 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:07.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:07.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:07.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:07.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:07.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:07.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:07.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:07.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:07.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:07.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:07.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:07.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:07.953 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:07.953 ' 00:30:10.498 [2024-07-15 11:41:39.185401] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:11.882 [2024-07-15 11:41:40.485599] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:14.424 [2024-07-15 11:41:42.896743] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:16.353 [2024-07-15 11:41:44.978995] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:18.289 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:18.289 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:18.289 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:18.289 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:18.289 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:18.289 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:18.289 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:18.289 
Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:18.289 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:18.289 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:18.289 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:18.289 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:18.289 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:18.289 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:18.289 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:18.289 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:18.289 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:18.289 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:18.289 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:18.289 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:18.289 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:18.289 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:18.289 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:18.289 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:18.289 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:18.289 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:18.289 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:18.289 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:18.289 11:41:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:18.289 11:41:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:18.289 11:41:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:18.289 11:41:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:18.289 11:41:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:18.289 11:41:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:18.289 11:41:46 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:18.289 11:41:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:18.551 11:41:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:18.551 11:41:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:18.551 11:41:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:18.551 11:41:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:18.551 11:41:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:18.551 11:41:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:18.551 11:41:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:18.551 11:41:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:18.551 11:41:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:18.551 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:18.551 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:18.551 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:18.551 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:18.551 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:18.551 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:18.551 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:18.551 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:18.551 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:18.551 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:18.551 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:18.551 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:18.551 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:18.551 ' 00:30:23.836 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:23.836 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:23.837 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:23.837 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:23.837 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:23.837 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:23.837 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 
00:30:23.837 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:23.837 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:23.837 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:23.837 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:23.837 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:23.837 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:23.837 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:23.837 11:41:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:23.837 11:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:23.837 11:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:23.837 11:41:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3741209 00:30:23.837 11:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3741209 ']' 00:30:23.837 11:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3741209 00:30:23.837 11:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:30:24.098 11:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:24.098 11:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3741209 00:30:24.098 11:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:24.098 11:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:24.098 11:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3741209' 00:30:24.098 killing process with pid 3741209 00:30:24.098 11:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 3741209 00:30:24.098 11:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 3741209 00:30:24.098 11:41:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:24.098 11:41:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:24.098 11:41:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3741209 ']' 00:30:24.098 11:41:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3741209 00:30:24.098 11:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3741209 ']' 00:30:24.098 11:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3741209 00:30:24.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3741209) - No such process 00:30:24.098 11:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 3741209 is not found' 00:30:24.098 Process with pid 3741209 is not found 00:30:24.098 11:41:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:24.098 11:41:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:24.098 11:41:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:24.098 00:30:24.098 real 0m17.119s 00:30:24.098 user 0m37.446s 00:30:24.098 sys 0m0.843s 00:30:24.098 11:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:24.098 11:41:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:24.098 
************************************ 00:30:24.098 END TEST spdkcli_nvmf_tcp 00:30:24.098 ************************************ 00:30:24.098 11:41:52 -- common/autotest_common.sh@1142 -- # return 0 00:30:24.098 11:41:52 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:24.098 11:41:52 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:24.098 11:41:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:24.098 11:41:52 -- common/autotest_common.sh@10 -- # set +x 00:30:24.359 ************************************ 00:30:24.359 START TEST nvmf_identify_passthru 00:30:24.359 ************************************ 00:30:24.359 11:41:52 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:24.359 * Looking for test storage... 00:30:24.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:24.359 11:41:52 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:24.359 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:24.359 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:24.359 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:24.359 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:24.359 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:24.359 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:24.359 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:24.359 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:24.359 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:24.359 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:24.360 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:24.360 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:24.360 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:24.360 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:24.360 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:24.360 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:24.360 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:24.360 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:24.360 11:41:52 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:24.360 11:41:52 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:24.360 11:41:52 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:24.360 11:41:52 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.360 11:41:52 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.360 11:41:52 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.360 11:41:52 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:24.360 11:41:52 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.360 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:30:24.360 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:24.360 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:24.360 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:24.360 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:24.360 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:24.360 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:24.360 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:24.360 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:24.360 11:41:52 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:24.360 11:41:52 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:24.360 11:41:52 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:24.360 11:41:52 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:24.360 11:41:52 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.360 11:41:52 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.360 11:41:52 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.360 11:41:52 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:24.360 11:41:52 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.360 11:41:52 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:24.360 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:24.360 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:24.360 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:24.360 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:24.360 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:24.360 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.360 11:41:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:24.360 11:41:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:24.360 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:24.360 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:24.360 11:41:52 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:30:24.360 11:41:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:30.952 11:41:59 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:30.952 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:30.952 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:30.952 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:30.952 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
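For readers following the trace: the nvmf_tcp_init step that the log executes next is just network plumbing around the two E810 ports it found above. A minimal sketch of that sequence, reconstructed from the commands logged immediately below (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are specific to this run and will differ on other hosts):

  # Clear any stale addresses, then isolate the target-side port in its own namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                 # root ns -> target ns sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns sanity check

The two ping checks below are the pass/fail gate for this setup; only then does nvmftestinit return 0 and load nvme-tcp.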
00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:30.952 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:30.953 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:30.953 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:30.953 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:30.953 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:30.953 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:31.214 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:31.214 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:31.214 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:31.214 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:31.214 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:31.214 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:31.214 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:31.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:31.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.581 ms 00:30:31.214 00:30:31.214 --- 10.0.0.2 ping statistics --- 00:30:31.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.214 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:30:31.214 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:31.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:31.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.343 ms 00:30:31.214 00:30:31.214 --- 10.0.0.1 ping statistics --- 00:30:31.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.214 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:30:31.214 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:31.214 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:30:31.214 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:31.214 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:31.214 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:31.214 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:31.214 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:31.214 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:31.214 11:41:59 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:31.214 11:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:31.214 11:41:59 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:31.214 11:41:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:31.214 11:41:59 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:31.214 11:41:59 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:30:31.214 11:41:59 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:30:31.214 11:41:59 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:30:31.214 11:41:59 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:30:31.214 11:41:59 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:31.214 11:41:59 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:30:31.214 11:41:59 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:31.475 11:41:59 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:31.475 11:41:59 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:31.475 11:42:00 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:31.475 11:42:00 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:30:31.475 11:42:00 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:65:00.0 00:30:31.475 11:42:00 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:30:31.475 11:42:00 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:30:31.475 11:42:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:31.475 11:42:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:31.475 11:42:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:31.475 EAL: No free 2048 kB hugepages reported on node 1 00:30:32.046 
11:42:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:30:32.046 11:42:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:32.046 11:42:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:32.046 11:42:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:32.046 EAL: No free 2048 kB hugepages reported on node 1 00:30:32.307 11:42:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:30:32.307 11:42:00 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:32.307 11:42:00 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:32.307 11:42:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:32.307 11:42:00 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:32.307 11:42:00 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:32.307 11:42:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:32.307 11:42:01 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3748343 00:30:32.307 11:42:01 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:32.307 11:42:01 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:32.307 11:42:01 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3748343 00:30:32.307 11:42:01 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 3748343 ']' 00:30:32.307 11:42:01 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.307 11:42:01 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:32.307 11:42:01 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:32.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:32.307 11:42:01 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:32.307 11:42:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:32.568 [2024-07-15 11:42:01.058873] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:30:32.568 [2024-07-15 11:42:01.058939] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:32.568 EAL: No free 2048 kB hugepages reported on node 1 00:30:32.568 [2024-07-15 11:42:01.129162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:32.568 [2024-07-15 11:42:01.204448] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:32.568 [2024-07-15 11:42:01.204485] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
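The stretch of log that follows drives the freshly started nvmf_tgt (launched above with --wait-for-rpc inside the cvl_0_0_ns_spdk namespace) over JSON-RPC. A minimal sketch of that bring-up, under the assumption that rpc_cmd in these tests is a thin wrapper around scripts/rpc.py talking to the target's default /var/tmp/spdk.sock:

  # Enable Identify passthru before finishing app init (possible only while --wait-for-rpc holds it)
  scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
  scripts/rpc.py framework_start_init
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # Attach the local PCIe drive and export it over NVMe/TCP
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The test then runs spdk_nvme_identify against the fabric address (trtype:tcp, traddr 10.0.0.2, trsvcid 4420, subnqn ...cnode1) and checks that the serial and model numbers returned over TCP match the ones read directly over PCIe above, which is exactly what --passthru-identify-ctrlr is supposed to guarantee.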
00:30:32.568 [2024-07-15 11:42:01.204493] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:32.568 [2024-07-15 11:42:01.204500] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:32.568 [2024-07-15 11:42:01.204505] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:32.568 [2024-07-15 11:42:01.204671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:32.568 [2024-07-15 11:42:01.204800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:32.568 [2024-07-15 11:42:01.204957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.568 [2024-07-15 11:42:01.204958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:33.139 11:42:01 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:33.139 11:42:01 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:30:33.139 11:42:01 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:33.139 11:42:01 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.139 11:42:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:33.139 INFO: Log level set to 20 00:30:33.139 INFO: Requests: 00:30:33.139 { 00:30:33.139 "jsonrpc": "2.0", 00:30:33.139 "method": "nvmf_set_config", 00:30:33.139 "id": 1, 00:30:33.139 "params": { 00:30:33.139 "admin_cmd_passthru": { 00:30:33.139 "identify_ctrlr": true 00:30:33.139 } 00:30:33.139 } 00:30:33.139 } 00:30:33.139 00:30:33.139 INFO: response: 00:30:33.139 { 00:30:33.139 "jsonrpc": "2.0", 00:30:33.139 "id": 1, 00:30:33.139 "result": true 00:30:33.139 } 00:30:33.139 00:30:33.139 11:42:01 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.139 11:42:01 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:33.139 11:42:01 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.139 11:42:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:33.139 INFO: Setting log level to 20 00:30:33.139 INFO: Setting log level to 20 00:30:33.139 INFO: Log level set to 20 00:30:33.139 INFO: Log level set to 20 00:30:33.139 INFO: Requests: 00:30:33.139 { 00:30:33.139 "jsonrpc": "2.0", 00:30:33.139 "method": "framework_start_init", 00:30:33.139 "id": 1 00:30:33.139 } 00:30:33.139 00:30:33.139 INFO: Requests: 00:30:33.139 { 00:30:33.139 "jsonrpc": "2.0", 00:30:33.139 "method": "framework_start_init", 00:30:33.139 "id": 1 00:30:33.139 } 00:30:33.139 00:30:33.401 [2024-07-15 11:42:01.900542] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:33.401 INFO: response: 00:30:33.401 { 00:30:33.401 "jsonrpc": "2.0", 00:30:33.401 "id": 1, 00:30:33.401 "result": true 00:30:33.401 } 00:30:33.401 00:30:33.401 INFO: response: 00:30:33.401 { 00:30:33.401 "jsonrpc": "2.0", 00:30:33.401 "id": 1, 00:30:33.401 "result": true 00:30:33.401 } 00:30:33.401 00:30:33.401 11:42:01 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.401 11:42:01 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:33.401 11:42:01 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.401 11:42:01 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:30:33.401 INFO: Setting log level to 40 00:30:33.401 INFO: Setting log level to 40 00:30:33.401 INFO: Setting log level to 40 00:30:33.401 [2024-07-15 11:42:01.913844] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:33.401 11:42:01 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.401 11:42:01 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:33.401 11:42:01 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:33.401 11:42:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:33.401 11:42:01 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:30:33.401 11:42:01 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.401 11:42:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:33.661 Nvme0n1 00:30:33.661 11:42:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.661 11:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:33.661 11:42:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.661 11:42:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:33.661 11:42:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.661 11:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:33.661 11:42:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.661 11:42:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:33.661 11:42:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.661 11:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:33.661 11:42:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.662 11:42:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:33.662 [2024-07-15 11:42:02.301463] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:33.662 11:42:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.662 11:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:33.662 11:42:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.662 11:42:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:33.662 [ 00:30:33.662 { 00:30:33.662 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:33.662 "subtype": "Discovery", 00:30:33.662 "listen_addresses": [], 00:30:33.662 "allow_any_host": true, 00:30:33.662 "hosts": [] 00:30:33.662 }, 00:30:33.662 { 00:30:33.662 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:33.662 "subtype": "NVMe", 00:30:33.662 "listen_addresses": [ 00:30:33.662 { 00:30:33.662 "trtype": "TCP", 00:30:33.662 "adrfam": "IPv4", 00:30:33.662 "traddr": "10.0.0.2", 00:30:33.662 "trsvcid": "4420" 00:30:33.662 } 00:30:33.662 ], 00:30:33.662 "allow_any_host": true, 00:30:33.662 "hosts": [], 00:30:33.662 "serial_number": 
"SPDK00000000000001", 00:30:33.662 "model_number": "SPDK bdev Controller", 00:30:33.662 "max_namespaces": 1, 00:30:33.662 "min_cntlid": 1, 00:30:33.662 "max_cntlid": 65519, 00:30:33.662 "namespaces": [ 00:30:33.662 { 00:30:33.662 "nsid": 1, 00:30:33.662 "bdev_name": "Nvme0n1", 00:30:33.662 "name": "Nvme0n1", 00:30:33.662 "nguid": "36344730526054870025384500000044", 00:30:33.662 "uuid": "36344730-5260-5487-0025-384500000044" 00:30:33.662 } 00:30:33.662 ] 00:30:33.662 } 00:30:33.662 ] 00:30:33.662 11:42:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.662 11:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:33.662 11:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:33.662 11:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:33.662 EAL: No free 2048 kB hugepages reported on node 1 00:30:33.923 11:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:30:33.923 11:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:33.923 11:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:33.923 11:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:33.923 EAL: No free 2048 kB hugepages reported on node 1 00:30:34.183 11:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:30:34.184 11:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:30:34.184 11:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:30:34.184 11:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:34.184 11:42:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.184 11:42:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:34.184 11:42:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.184 11:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:34.184 11:42:02 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:34.184 11:42:02 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:34.184 11:42:02 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:30:34.184 11:42:02 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:34.184 11:42:02 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:30:34.184 11:42:02 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:34.184 11:42:02 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:34.184 rmmod nvme_tcp 00:30:34.184 rmmod nvme_fabrics 00:30:34.184 rmmod nvme_keyring 00:30:34.184 11:42:02 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:34.184 11:42:02 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:30:34.184 11:42:02 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:30:34.184 11:42:02 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3748343 ']' 00:30:34.184 11:42:02 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3748343 00:30:34.184 11:42:02 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 3748343 ']' 00:30:34.184 11:42:02 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 3748343 00:30:34.184 11:42:02 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:30:34.184 11:42:02 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:34.184 11:42:02 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3748343 00:30:34.184 11:42:02 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:34.184 11:42:02 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:34.184 11:42:02 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3748343' 00:30:34.184 killing process with pid 3748343 00:30:34.184 11:42:02 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 3748343 00:30:34.184 11:42:02 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 3748343 00:30:34.444 11:42:03 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:34.444 11:42:03 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:34.444 11:42:03 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:34.444 11:42:03 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:34.444 11:42:03 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:34.444 11:42:03 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:34.444 11:42:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:34.444 11:42:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.989 11:42:05 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:36.989 00:30:36.989 real 0m12.308s 00:30:36.989 user 0m9.542s 00:30:36.989 sys 0m5.904s 00:30:36.989 11:42:05 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:36.989 11:42:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:36.989 ************************************ 00:30:36.989 END TEST nvmf_identify_passthru 00:30:36.989 ************************************ 00:30:36.989 11:42:05 -- common/autotest_common.sh@1142 -- # return 0 00:30:36.989 11:42:05 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:36.989 11:42:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:36.989 11:42:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:36.989 11:42:05 -- common/autotest_common.sh@10 -- # set +x 00:30:36.989 ************************************ 00:30:36.989 START TEST nvmf_dif 00:30:36.989 ************************************ 00:30:36.989 11:42:05 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:36.989 * Looking for test storage... 
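Before the nvmf_dif run gets going, note that the nvmftestfini/cleanup sequence just logged for identify_passthru reduces to roughly the following (a sketch; _remove_spdk_ns is assumed here to delete the test namespace, which the trace does not show explicitly):

  modprobe -r nvme-tcp               # also unloads nvme_fabrics / nvme_keyring, as logged
  modprobe -r nvme-fabrics
  kill 3748343 && wait 3748343       # stop the nvmf_tgt app started for this test
  # _remove_spdk_ns: tear down the cvl_0_0_ns_spdk namespace (assumed equivalent shown)
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1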
00:30:36.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:36.989 11:42:05 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:36.989 11:42:05 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:36.989 11:42:05 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:36.989 11:42:05 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:36.989 11:42:05 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:36.989 11:42:05 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:36.989 11:42:05 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:36.989 11:42:05 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:36.989 11:42:05 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:36.989 11:42:05 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:36.989 11:42:05 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:36.989 11:42:05 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:36.989 11:42:05 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:36.989 11:42:05 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:36.989 11:42:05 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:36.989 11:42:05 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:36.989 11:42:05 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:36.989 11:42:05 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:36.989 11:42:05 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:36.989 11:42:05 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:36.989 11:42:05 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:36.989 11:42:05 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:36.989 11:42:05 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.989 11:42:05 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.989 11:42:05 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.989 11:42:05 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:30:36.989 11:42:05 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.989 11:42:05 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:30:36.989 11:42:05 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:36.989 11:42:05 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:36.989 11:42:05 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:36.989 11:42:05 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:36.989 11:42:05 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:36.989 11:42:05 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:36.989 11:42:05 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:36.989 11:42:05 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:36.989 11:42:05 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:36.989 11:42:05 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:36.989 11:42:05 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:36.989 11:42:05 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:36.989 11:42:05 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:36.989 11:42:05 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:36.989 11:42:05 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:36.990 11:42:05 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:36.990 11:42:05 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:36.990 11:42:05 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:36.990 11:42:05 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.990 11:42:05 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:36.990 11:42:05 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.990 11:42:05 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:36.990 11:42:05 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:36.990 11:42:05 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:30:36.990 11:42:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:43.595 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:43.595 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
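The discovery loop above resolves each supported NIC PCI function to its kernel net device by globbing sysfs, which is where the "Found net devices under ..." lines come from. A standalone sketch of the same lookup (the 0000:4b:00.x addresses and cvl_0_* names are taken from this machine):

  # For each NVMf-capable NIC, list the netdev(s) the kernel bound to it
  for pci in 0000:4b:00.0 0000:4b:00.1; do
      for netdev in /sys/bus/pci/devices/$pci/net/*; do
          echo "Found net devices under $pci: ${netdev##*/}"
      done
  done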
00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:43.595 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:43.595 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:43.595 11:42:12 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:43.856 11:42:12 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:43.856 11:42:12 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:43.856 11:42:12 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:43.856 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:43.856 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:30:43.856 00:30:43.856 --- 10.0.0.2 ping statistics --- 00:30:43.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.856 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:30:43.856 11:42:12 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:43.856 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:43.856 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:30:43.856 00:30:43.856 --- 10.0.0.1 ping statistics --- 00:30:43.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.856 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:30:43.856 11:42:12 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:43.856 11:42:12 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:30:43.856 11:42:12 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:43.856 11:42:12 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:47.194 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:47.194 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:47.194 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:47.194 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:47.194 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:47.194 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:47.195 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:47.195 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:47.195 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:47.195 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:30:47.195 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:47.195 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:47.195 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:47.195 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:47.195 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:47.195 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:47.195 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:47.195 11:42:15 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:47.195 11:42:15 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:47.195 11:42:15 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:47.195 11:42:15 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:47.195 11:42:15 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:47.195 11:42:15 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:47.195 11:42:15 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:47.195 11:42:15 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:47.195 11:42:15 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:47.195 11:42:15 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:47.195 11:42:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:47.195 11:42:15 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3754811 00:30:47.195 11:42:15 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3754811 00:30:47.195 11:42:15 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:47.195 11:42:15 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 3754811 ']' 00:30:47.195 11:42:15 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:47.195 11:42:15 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:47.195 11:42:15 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:47.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:47.195 11:42:15 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:47.195 11:42:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:47.195 [2024-07-15 11:42:15.755860] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:30:47.195 [2024-07-15 11:42:15.755920] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:47.195 EAL: No free 2048 kB hugepages reported on node 1 00:30:47.195 [2024-07-15 11:42:15.828840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:47.457 [2024-07-15 11:42:15.903900] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:47.457 [2024-07-15 11:42:15.903941] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:47.457 [2024-07-15 11:42:15.903948] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:47.457 [2024-07-15 11:42:15.903954] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:47.457 [2024-07-15 11:42:15.903960] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
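What follows is the DIF-specific target bring-up for fio_dif_1_default. Stripped of the xtrace noise, and again assuming rpc_cmd maps to scripts/rpc.py against the default socket, it amounts to roughly:

  # TCP transport with DIF insert/strip enabled (dif.sh appends --dif-insert-or-strip)
  scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  # Null bdev per the NULL_* defaults above: 64 MB, 512-byte blocks, 16-byte metadata, DIF type 1
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

fio is then run with the spdk_bdev ioengine against a JSON config produced by gen_nvmf_target_json, whose bdev_nvme_attach_controller entry (trtype tcp, traddr 10.0.0.2, subnqn ...cnode0) can be seen in the template further down in the trace.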
00:30:47.457 [2024-07-15 11:42:15.903980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:48.030 11:42:16 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:48.030 11:42:16 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:30:48.030 11:42:16 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:48.030 11:42:16 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:48.030 11:42:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:48.030 11:42:16 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:48.030 11:42:16 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:48.030 11:42:16 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:48.030 11:42:16 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.030 11:42:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:48.030 [2024-07-15 11:42:16.583204] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:48.030 11:42:16 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.030 11:42:16 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:48.030 11:42:16 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:48.030 11:42:16 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:48.030 11:42:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:48.030 ************************************ 00:30:48.030 START TEST fio_dif_1_default 00:30:48.030 ************************************ 00:30:48.030 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:30:48.030 11:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:48.030 11:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:48.030 11:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:48.030 11:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:48.030 11:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:48.030 11:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:48.030 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.030 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:48.030 bdev_null0 00:30:48.030 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.030 11:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:48.030 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.030 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:48.030 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.030 11:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:48.030 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.030 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:48.030 11:42:16 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.030 11:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:48.030 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.030 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:48.030 [2024-07-15 11:42:16.671541] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:48.030 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.030 11:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:48.030 11:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:48.030 11:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:48.030 11:42:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:30:48.030 11:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:48.031 11:42:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:30:48.031 11:42:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:48.031 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:48.031 11:42:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:48.031 { 00:30:48.031 "params": { 00:30:48.031 "name": "Nvme$subsystem", 00:30:48.031 "trtype": "$TEST_TRANSPORT", 00:30:48.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:48.031 "adrfam": "ipv4", 00:30:48.031 "trsvcid": "$NVMF_PORT", 00:30:48.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:48.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:48.031 "hdgst": ${hdgst:-false}, 00:30:48.031 "ddgst": ${ddgst:-false} 00:30:48.031 }, 00:30:48.031 "method": "bdev_nvme_attach_controller" 00:30:48.031 } 00:30:48.031 EOF 00:30:48.031 )") 00:30:48.031 11:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:48.031 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:48.031 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:48.031 11:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:48.031 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:48.031 11:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:48.031 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:48.031 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:30:48.031 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:48.031 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:48.031 11:42:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:30:48.031 11:42:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:48.031 11:42:16 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:30:48.031 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:48.031 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:30:48.031 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:48.031 11:42:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:30:48.031 11:42:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:30:48.031 11:42:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:48.031 "params": { 00:30:48.031 "name": "Nvme0", 00:30:48.031 "trtype": "tcp", 00:30:48.031 "traddr": "10.0.0.2", 00:30:48.031 "adrfam": "ipv4", 00:30:48.031 "trsvcid": "4420", 00:30:48.031 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:48.031 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:48.031 "hdgst": false, 00:30:48.031 "ddgst": false 00:30:48.031 }, 00:30:48.031 "method": "bdev_nvme_attach_controller" 00:30:48.031 }' 00:30:48.031 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:48.031 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:48.031 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:48.031 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:48.031 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:48.031 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:48.319 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:48.319 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:48.319 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:48.319 11:42:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:48.578 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:48.578 fio-3.35 00:30:48.578 Starting 1 thread 00:30:48.579 EAL: No free 2048 kB hugepages reported on node 1 00:31:00.828 00:31:00.828 filename0: (groupid=0, jobs=1): err= 0: pid=3755295: Mon Jul 15 11:42:27 2024 00:31:00.828 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10018msec) 00:31:00.828 slat (nsec): min=5401, max=32769, avg=6321.55, stdev=1488.00 00:31:00.828 clat (usec): min=723, max=43302, avg=21571.81, stdev=20171.87 00:31:00.828 lat (usec): min=731, max=43334, avg=21578.13, stdev=20171.86 00:31:00.828 clat percentiles (usec): 00:31:00.828 | 1.00th=[ 1106], 5.00th=[ 1270], 10.00th=[ 1303], 20.00th=[ 1319], 00:31:00.828 | 30.00th=[ 1336], 40.00th=[ 1352], 50.00th=[41157], 60.00th=[41681], 00:31:00.828 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:31:00.828 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:31:00.828 | 99.99th=[43254] 00:31:00.828 bw ( KiB/s): min= 672, max= 768, per=99.86%, avg=740.80, stdev=33.28, samples=20 00:31:00.828 iops : min= 168, max= 192, avg=185.20, stdev= 8.32, samples=20 
00:31:00.828 lat (usec) : 750=0.22%, 1000=0.22% 00:31:00.828 lat (msec) : 2=49.35%, 50=50.22% 00:31:00.828 cpu : usr=95.33%, sys=4.47%, ctx=18, majf=0, minf=242 00:31:00.828 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:00.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.828 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.828 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.828 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:00.828 00:31:00.828 Run status group 0 (all jobs): 00:31:00.828 READ: bw=741KiB/s (759kB/s), 741KiB/s-741KiB/s (759kB/s-759kB/s), io=7424KiB (7602kB), run=10018-10018msec 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.828 00:31:00.828 real 0m11.032s 00:31:00.828 user 0m22.374s 00:31:00.828 sys 0m0.730s 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:00.828 ************************************ 00:31:00.828 END TEST fio_dif_1_default 00:31:00.828 ************************************ 00:31:00.828 11:42:27 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:00.828 11:42:27 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:00.828 11:42:27 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:00.828 11:42:27 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:00.828 11:42:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:00.828 ************************************ 00:31:00.828 START TEST fio_dif_1_multi_subsystems 00:31:00.828 ************************************ 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # 
for sub in "$@" 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:00.828 bdev_null0 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:00.828 [2024-07-15 11:42:27.781604] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:00.828 bdev_null1 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:00.828 11:42:27 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:31:00.828 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:00.829 { 00:31:00.829 "params": { 00:31:00.829 "name": "Nvme$subsystem", 00:31:00.829 "trtype": "$TEST_TRANSPORT", 00:31:00.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:00.829 "adrfam": "ipv4", 00:31:00.829 "trsvcid": "$NVMF_PORT", 00:31:00.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:00.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:00.829 "hdgst": ${hdgst:-false}, 00:31:00.829 "ddgst": ${ddgst:-false} 00:31:00.829 }, 00:31:00.829 "method": "bdev_nvme_attach_controller" 00:31:00.829 } 00:31:00.829 EOF 00:31:00.829 )") 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:00.829 { 00:31:00.829 "params": { 00:31:00.829 "name": "Nvme$subsystem", 00:31:00.829 "trtype": "$TEST_TRANSPORT", 00:31:00.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:00.829 "adrfam": "ipv4", 00:31:00.829 "trsvcid": "$NVMF_PORT", 00:31:00.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:00.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:00.829 "hdgst": ${hdgst:-false}, 00:31:00.829 "ddgst": ${ddgst:-false} 00:31:00.829 }, 00:31:00.829 "method": "bdev_nvme_attach_controller" 00:31:00.829 } 00:31:00.829 EOF 00:31:00.829 )") 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
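The rpc_cmd lines traced in this test are wrappers around SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock, so the per-subsystem setup corresponds roughly to the direct invocations sketched below. The arguments are copied from the trace; only subsystem 0 is shown, and this test repeats the same steps for bdev_null1 / cnode1 / serial 53313233-1.

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # The TCP transport was created once, earlier in the run, with DIF insert/strip enabled.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  # Null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1, exported through its own subsystem.
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420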
00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:00.829 "params": { 00:31:00.829 "name": "Nvme0", 00:31:00.829 "trtype": "tcp", 00:31:00.829 "traddr": "10.0.0.2", 00:31:00.829 "adrfam": "ipv4", 00:31:00.829 "trsvcid": "4420", 00:31:00.829 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:00.829 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:00.829 "hdgst": false, 00:31:00.829 "ddgst": false 00:31:00.829 }, 00:31:00.829 "method": "bdev_nvme_attach_controller" 00:31:00.829 },{ 00:31:00.829 "params": { 00:31:00.829 "name": "Nvme1", 00:31:00.829 "trtype": "tcp", 00:31:00.829 "traddr": "10.0.0.2", 00:31:00.829 "adrfam": "ipv4", 00:31:00.829 "trsvcid": "4420", 00:31:00.829 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:00.829 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:00.829 "hdgst": false, 00:31:00.829 "ddgst": false 00:31:00.829 }, 00:31:00.829 "method": "bdev_nvme_attach_controller" 00:31:00.829 }' 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:00.829 11:42:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:00.829 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:00.829 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:00.829 fio-3.35 00:31:00.829 Starting 2 threads 00:31:00.829 EAL: No free 2048 kB hugepages reported on node 1 00:31:10.831 00:31:10.831 filename0: (groupid=0, jobs=1): err= 0: pid=3757745: Mon Jul 15 11:42:39 2024 00:31:10.831 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10020msec) 00:31:10.831 slat (nsec): min=5423, max=37341, avg=6333.39, stdev=2030.61 00:31:10.831 clat (usec): min=1080, max=43183, avg=21575.74, stdev=20157.57 00:31:10.831 lat (usec): min=1085, max=43213, avg=21582.07, stdev=20157.58 00:31:10.831 clat percentiles (usec): 00:31:10.831 | 1.00th=[ 1205], 5.00th=[ 1287], 10.00th=[ 1303], 20.00th=[ 1319], 00:31:10.831 | 30.00th=[ 1352], 40.00th=[ 1369], 50.00th=[41157], 60.00th=[41681], 00:31:10.831 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:31:10.831 | 99.00th=[41681], 99.50th=[41681], 99.90th=[43254], 99.95th=[43254], 00:31:10.831 | 99.99th=[43254] 
00:31:10.831 bw ( KiB/s): min= 672, max= 768, per=66.01%, avg=740.80, stdev=34.86, samples=20 00:31:10.831 iops : min= 168, max= 192, avg=185.20, stdev= 8.72, samples=20 00:31:10.831 lat (msec) : 2=49.78%, 50=50.22% 00:31:10.831 cpu : usr=96.89%, sys=2.90%, ctx=12, majf=0, minf=102 00:31:10.831 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:10.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.831 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.831 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:10.832 filename1: (groupid=0, jobs=1): err= 0: pid=3757746: Mon Jul 15 11:42:39 2024 00:31:10.832 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10002msec) 00:31:10.832 slat (nsec): min=5413, max=56553, avg=6524.46, stdev=3045.92 00:31:10.832 clat (usec): min=41796, max=43171, avg=42003.72, stdev=155.08 00:31:10.832 lat (usec): min=41802, max=43201, avg=42010.25, stdev=155.53 00:31:10.832 clat percentiles (usec): 00:31:10.832 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:31:10.832 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:10.832 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:10.832 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:31:10.832 | 99.99th=[43254] 00:31:10.832 bw ( KiB/s): min= 352, max= 384, per=33.90%, avg=380.63, stdev=10.09, samples=19 00:31:10.832 iops : min= 88, max= 96, avg=95.16, stdev= 2.52, samples=19 00:31:10.832 lat (msec) : 50=100.00% 00:31:10.832 cpu : usr=96.83%, sys=2.96%, ctx=9, majf=0, minf=176 00:31:10.832 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:10.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.832 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.832 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:10.832 00:31:10.832 Run status group 0 (all jobs): 00:31:10.832 READ: bw=1121KiB/s (1148kB/s), 381KiB/s-741KiB/s (390kB/s-759kB/s), io=11.0MiB (11.5MB), run=10002-10020msec 00:31:10.832 11:42:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:10.832 11:42:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:10.832 11:42:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:10.832 11:42:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:10.832 11:42:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:10.832 11:42:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:10.832 11:42:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.832 11:42:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:10.832 11:42:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.832 11:42:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:10.832 11:42:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.832 11:42:39 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:10.832 11:42:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.832 11:42:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:10.832 11:42:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:10.832 11:42:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:10.832 11:42:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:10.832 11:42:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.832 11:42:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:10.832 11:42:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.832 11:42:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:10.832 11:42:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.832 11:42:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:10.832 11:42:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.832 00:31:10.832 real 0m11.503s 00:31:10.832 user 0m38.792s 00:31:10.832 sys 0m0.957s 00:31:10.832 11:42:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:10.832 11:42:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:10.832 ************************************ 00:31:10.832 END TEST fio_dif_1_multi_subsystems 00:31:10.832 ************************************ 00:31:10.832 11:42:39 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:10.832 11:42:39 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:10.832 11:42:39 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:10.832 11:42:39 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:10.832 11:42:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:10.832 ************************************ 00:31:10.832 START TEST fio_dif_rand_params 00:31:10.832 ************************************ 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
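This first fio_dif_rand_params pass sets NULL_DIF=3, bs=128k, numjobs=3, iodepth=3 and runtime=5 (the assignments are visible just above), and gen_fio_conf turns those values into the job file fio reads a little further down. A hypothetical job file consistent with the logged fio header (section filename0, 128KiB random reads, 3 jobs, iodepth 3, ~5 s runtime) is sketched here; the filename=Nvme0n1 value and the thread/time_based options are assumptions about how the spdk_bdev plugin is usually driven, not text copied from the test.

  [global]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  time_based=1
  runtime=5

  [filename0]
  filename=Nvme0n1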
00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.832 bdev_null0 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.832 [2024-07-15 11:42:39.364515] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:10.832 { 00:31:10.832 "params": { 00:31:10.832 "name": "Nvme$subsystem", 00:31:10.832 "trtype": "$TEST_TRANSPORT", 00:31:10.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:10.832 "adrfam": "ipv4", 00:31:10.832 "trsvcid": "$NVMF_PORT", 00:31:10.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:10.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:10.832 "hdgst": ${hdgst:-false}, 
00:31:10.832 "ddgst": ${ddgst:-false} 00:31:10.832 }, 00:31:10.832 "method": "bdev_nvme_attach_controller" 00:31:10.832 } 00:31:10.832 EOF 00:31:10.832 )") 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:10.832 11:42:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:31:10.833 11:42:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:10.833 11:42:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:10.833 "params": { 00:31:10.833 "name": "Nvme0", 00:31:10.833 "trtype": "tcp", 00:31:10.833 "traddr": "10.0.0.2", 00:31:10.833 "adrfam": "ipv4", 00:31:10.833 "trsvcid": "4420", 00:31:10.833 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:10.833 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:10.833 "hdgst": false, 00:31:10.833 "ddgst": false 00:31:10.833 }, 00:31:10.833 "method": "bdev_nvme_attach_controller" 00:31:10.833 }' 00:31:10.833 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:10.833 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:10.833 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:10.833 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:10.833 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:10.833 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:10.833 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:10.833 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:10.833 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:10.833 11:42:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:11.402 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:11.402 ... 
00:31:11.402 fio-3.35 00:31:11.402 Starting 3 threads 00:31:11.402 EAL: No free 2048 kB hugepages reported on node 1 00:31:17.997 00:31:17.997 filename0: (groupid=0, jobs=1): err= 0: pid=3759940: Mon Jul 15 11:42:45 2024 00:31:17.997 read: IOPS=208, BW=26.1MiB/s (27.3MB/s)(132MiB/5047msec) 00:31:17.997 slat (nsec): min=5430, max=33834, avg=6521.81, stdev=1570.83 00:31:17.997 clat (usec): min=5453, max=57462, avg=14326.36, stdev=13542.46 00:31:17.997 lat (usec): min=5459, max=57469, avg=14332.88, stdev=13542.54 00:31:17.997 clat percentiles (usec): 00:31:17.997 | 1.00th=[ 5800], 5.00th=[ 6325], 10.00th=[ 6587], 20.00th=[ 7635], 00:31:17.997 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9765], 60.00th=[10552], 00:31:17.997 | 70.00th=[11469], 80.00th=[12649], 90.00th=[50070], 95.00th=[52167], 00:31:17.997 | 99.00th=[53740], 99.50th=[54789], 99.90th=[55313], 99.95th=[57410], 00:31:17.997 | 99.99th=[57410] 00:31:17.997 bw ( KiB/s): min=13568, max=39680, per=39.08%, avg=26910.20, stdev=7243.54, samples=10 00:31:17.997 iops : min= 106, max= 310, avg=210.20, stdev=56.61, samples=10 00:31:17.997 lat (msec) : 10=52.52%, 20=36.09%, 50=1.90%, 100=9.50% 00:31:17.997 cpu : usr=96.77%, sys=2.93%, ctx=24, majf=0, minf=176 00:31:17.997 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:17.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.997 issued rwts: total=1053,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:17.997 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:17.997 filename0: (groupid=0, jobs=1): err= 0: pid=3759941: Mon Jul 15 11:42:45 2024 00:31:17.997 read: IOPS=129, BW=16.2MiB/s (17.0MB/s)(81.9MiB/5051msec) 00:31:17.997 slat (nsec): min=5503, max=37901, avg=8553.64, stdev=1836.57 00:31:17.997 clat (usec): min=7086, max=94936, avg=23051.36, stdev=20439.44 00:31:17.997 lat (usec): min=7094, max=94945, avg=23059.92, stdev=20439.33 00:31:17.997 clat percentiles (usec): 00:31:17.997 | 1.00th=[ 7767], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10421], 00:31:17.997 | 30.00th=[10945], 40.00th=[11731], 50.00th=[12518], 60.00th=[13435], 00:31:17.997 | 70.00th=[14615], 80.00th=[51119], 90.00th=[53216], 95.00th=[54789], 00:31:17.997 | 99.00th=[93848], 99.50th=[93848], 99.90th=[94897], 99.95th=[94897], 00:31:17.997 | 99.99th=[94897] 00:31:17.997 bw ( KiB/s): min=12032, max=23808, per=24.24%, avg=16691.20, stdev=3697.60, samples=10 00:31:17.997 iops : min= 94, max= 186, avg=130.40, stdev=28.89, samples=10 00:31:17.997 lat (msec) : 10=11.30%, 20=63.36%, 50=1.07%, 100=24.27% 00:31:17.997 cpu : usr=96.20%, sys=3.54%, ctx=11, majf=0, minf=49 00:31:17.997 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:17.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.997 issued rwts: total=655,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:17.997 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:17.997 filename0: (groupid=0, jobs=1): err= 0: pid=3759942: Mon Jul 15 11:42:45 2024 00:31:17.997 read: IOPS=201, BW=25.2MiB/s (26.4MB/s)(126MiB/5008msec) 00:31:17.997 slat (nsec): min=5441, max=34710, avg=6923.94, stdev=1735.41 00:31:17.997 clat (usec): min=5093, max=93823, avg=14878.56, stdev=15425.40 00:31:17.997 lat (usec): min=5099, max=93830, avg=14885.49, stdev=15425.41 00:31:17.997 clat percentiles (usec): 
00:31:17.997 | 1.00th=[ 5407], 5.00th=[ 6063], 10.00th=[ 6718], 20.00th=[ 8094], 00:31:17.997 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[10421], 00:31:17.997 | 70.00th=[11076], 80.00th=[12125], 90.00th=[49546], 95.00th=[51643], 00:31:17.997 | 99.00th=[90702], 99.50th=[92799], 99.90th=[93848], 99.95th=[93848], 00:31:17.997 | 99.99th=[93848] 00:31:17.997 bw ( KiB/s): min=12288, max=38656, per=37.40%, avg=25753.60, stdev=7249.02, samples=10 00:31:17.997 iops : min= 96, max= 302, avg=201.20, stdev=56.63, samples=10 00:31:17.997 lat (msec) : 10=54.91%, 20=33.10%, 50=2.87%, 100=9.12% 00:31:17.997 cpu : usr=95.59%, sys=3.40%, ctx=353, majf=0, minf=74 00:31:17.997 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:17.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.997 issued rwts: total=1009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:17.997 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:17.997 00:31:17.997 Run status group 0 (all jobs): 00:31:17.997 READ: bw=67.2MiB/s (70.5MB/s), 16.2MiB/s-26.1MiB/s (17.0MB/s-27.3MB/s), io=340MiB (356MB), run=5008-5051msec 00:31:17.997 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:17.997 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:17.997 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:17.997 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:17.997 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:17.997 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:17.997 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.997 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.997 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.997 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:17.997 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.997 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.997 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.997 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:17.997 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:17.997 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:17.997 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:17.997 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:17.997 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:17.997 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
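As a quick consistency check on the 'Run status group 0 (all jobs)' line a few entries above: the three jobs issued 1053 + 655 + 1009 = 2717 reads of 128KiB each, i.e. 2717 × 128 KiB, roughly 340 MiB in total, and spreading that over the ~5.0-5.05 s per-job runtimes works out to about 67 MiB/s, matching the reported io=340MiB (356MB) and aggregate bw=67.2MiB/s (70.5MB/s).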
00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.998 bdev_null0 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.998 [2024-07-15 11:42:45.666472] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.998 bdev_null1 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.998 bdev_null2 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:17.998 11:42:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:17.998 { 00:31:17.998 "params": { 00:31:17.998 "name": "Nvme$subsystem", 00:31:17.998 "trtype": "$TEST_TRANSPORT", 00:31:17.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:17.998 "adrfam": "ipv4", 00:31:17.998 "trsvcid": "$NVMF_PORT", 00:31:17.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:17.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:17.998 "hdgst": ${hdgst:-false}, 00:31:17.998 "ddgst": ${ddgst:-false} 00:31:17.998 }, 00:31:17.998 "method": "bdev_nvme_attach_controller" 00:31:17.998 } 00:31:17.998 EOF 00:31:17.999 )") 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:17.999 { 00:31:17.999 "params": { 00:31:17.999 "name": "Nvme$subsystem", 00:31:17.999 "trtype": "$TEST_TRANSPORT", 00:31:17.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:17.999 "adrfam": "ipv4", 00:31:17.999 "trsvcid": "$NVMF_PORT", 00:31:17.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:17.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:17.999 "hdgst": ${hdgst:-false}, 00:31:17.999 "ddgst": ${ddgst:-false} 00:31:17.999 }, 00:31:17.999 "method": "bdev_nvme_attach_controller" 00:31:17.999 } 00:31:17.999 EOF 00:31:17.999 )") 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 
-- # (( file++ )) 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:17.999 { 00:31:17.999 "params": { 00:31:17.999 "name": "Nvme$subsystem", 00:31:17.999 "trtype": "$TEST_TRANSPORT", 00:31:17.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:17.999 "adrfam": "ipv4", 00:31:17.999 "trsvcid": "$NVMF_PORT", 00:31:17.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:17.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:17.999 "hdgst": ${hdgst:-false}, 00:31:17.999 "ddgst": ${ddgst:-false} 00:31:17.999 }, 00:31:17.999 "method": "bdev_nvme_attach_controller" 00:31:17.999 } 00:31:17.999 EOF 00:31:17.999 )") 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:17.999 "params": { 00:31:17.999 "name": "Nvme0", 00:31:17.999 "trtype": "tcp", 00:31:17.999 "traddr": "10.0.0.2", 00:31:17.999 "adrfam": "ipv4", 00:31:17.999 "trsvcid": "4420", 00:31:17.999 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:17.999 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:17.999 "hdgst": false, 00:31:17.999 "ddgst": false 00:31:17.999 }, 00:31:17.999 "method": "bdev_nvme_attach_controller" 00:31:17.999 },{ 00:31:17.999 "params": { 00:31:17.999 "name": "Nvme1", 00:31:17.999 "trtype": "tcp", 00:31:17.999 "traddr": "10.0.0.2", 00:31:17.999 "adrfam": "ipv4", 00:31:17.999 "trsvcid": "4420", 00:31:17.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:17.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:17.999 "hdgst": false, 00:31:17.999 "ddgst": false 00:31:17.999 }, 00:31:17.999 "method": "bdev_nvme_attach_controller" 00:31:17.999 },{ 00:31:17.999 "params": { 00:31:17.999 "name": "Nvme2", 00:31:17.999 "trtype": "tcp", 00:31:17.999 "traddr": "10.0.0.2", 00:31:17.999 "adrfam": "ipv4", 00:31:17.999 "trsvcid": "4420", 00:31:17.999 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:17.999 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:17.999 "hdgst": false, 00:31:17.999 "ddgst": false 00:31:17.999 }, 00:31:17.999 "method": "bdev_nvme_attach_controller" 00:31:17.999 }' 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:17.999 11:42:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:17.999 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:17.999 ... 00:31:18.000 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:18.000 ... 00:31:18.000 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:18.000 ... 00:31:18.000 fio-3.35 00:31:18.000 Starting 24 threads 00:31:18.000 EAL: No free 2048 kB hugepages reported on node 1 00:31:30.220 00:31:30.220 filename0: (groupid=0, jobs=1): err= 0: pid=3761450: Mon Jul 15 11:42:57 2024 00:31:30.220 read: IOPS=523, BW=2095KiB/s (2145kB/s)(20.5MiB/10023msec) 00:31:30.220 slat (nsec): min=5416, max=82091, avg=10626.57, stdev=8673.33 00:31:30.220 clat (usec): min=12687, max=54951, avg=30462.82, stdev=4771.27 00:31:30.220 lat (usec): min=12694, max=54957, avg=30473.44, stdev=4772.26 00:31:30.220 clat percentiles (usec): 00:31:30.220 | 1.00th=[14484], 5.00th=[19530], 10.00th=[23200], 20.00th=[29754], 00:31:30.220 | 30.00th=[31065], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:31:30.220 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:31:30.220 | 99.00th=[42206], 99.50th=[48497], 99.90th=[54789], 99.95th=[54789], 00:31:30.220 | 99.99th=[54789] 00:31:30.221 bw ( KiB/s): min= 1920, max= 2288, per=4.40%, avg=2093.60, stdev=115.53, samples=20 00:31:30.221 iops : min= 480, max= 572, avg=523.40, stdev=28.88, samples=20 00:31:30.221 lat (msec) : 20=6.15%, 50=93.62%, 100=0.23% 00:31:30.221 cpu : usr=98.72%, sys=0.91%, ctx=18, majf=0, minf=9 00:31:30.221 IO depths : 1=4.5%, 2=9.0%, 4=19.6%, 8=58.4%, 16=8.5%, 32=0.0%, >=64=0.0% 00:31:30.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.221 complete : 0=0.0%, 4=92.8%, 8=2.0%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.221 issued rwts: total=5250,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.221 filename0: (groupid=0, jobs=1): err= 0: pid=3761451: Mon Jul 15 11:42:57 2024 00:31:30.221 read: IOPS=489, BW=1956KiB/s (2003kB/s)(19.1MiB/10020msec) 00:31:30.221 slat (nsec): min=5411, max=97503, avg=18300.64, stdev=15047.63 00:31:30.221 clat (usec): min=14337, max=66750, avg=32562.78, stdev=4425.95 00:31:30.221 lat (usec): min=14356, max=66782, avg=32581.08, stdev=4425.90 00:31:30.221 clat percentiles (usec): 00:31:30.221 | 1.00th=[19530], 5.00th=[26084], 10.00th=[30540], 20.00th=[31327], 00:31:30.221 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:31:30.221 | 70.00th=[32637], 80.00th=[33162], 90.00th=[35390], 95.00th=[41157], 00:31:30.221 | 99.00th=[49546], 99.50th=[54264], 99.90th=[56886], 99.95th=[57410], 00:31:30.221 | 99.99th=[66847] 00:31:30.221 bw ( KiB/s): min= 1792, max= 2096, per=4.11%, avg=1955.79, stdev=79.62, samples=19 00:31:30.221 iops : min= 448, max= 524, avg=488.95, stdev=19.90, samples=19 00:31:30.221 lat (msec) : 20=1.29%, 50=97.78%, 
100=0.94% 00:31:30.221 cpu : usr=99.05%, sys=0.60%, ctx=16, majf=0, minf=9 00:31:30.221 IO depths : 1=2.8%, 2=6.1%, 4=16.4%, 8=64.0%, 16=10.8%, 32=0.0%, >=64=0.0% 00:31:30.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.221 complete : 0=0.0%, 4=92.3%, 8=2.9%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.221 issued rwts: total=4901,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.221 filename0: (groupid=0, jobs=1): err= 0: pid=3761452: Mon Jul 15 11:42:57 2024 00:31:30.221 read: IOPS=500, BW=2003KiB/s (2051kB/s)(19.6MiB/10006msec) 00:31:30.221 slat (nsec): min=5412, max=77977, avg=12274.44, stdev=9927.13 00:31:30.221 clat (usec): min=17152, max=51697, avg=31860.27, stdev=4271.60 00:31:30.221 lat (usec): min=17158, max=51705, avg=31872.54, stdev=4272.47 00:31:30.221 clat percentiles (usec): 00:31:30.221 | 1.00th=[18744], 5.00th=[22938], 10.00th=[28181], 20.00th=[31065], 00:31:30.221 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:30.221 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33817], 95.00th=[39060], 00:31:30.221 | 99.00th=[47973], 99.50th=[50070], 99.90th=[51643], 99.95th=[51643], 00:31:30.221 | 99.99th=[51643] 00:31:30.221 bw ( KiB/s): min= 1920, max= 2080, per=4.20%, avg=2001.68, stdev=63.25, samples=19 00:31:30.221 iops : min= 480, max= 520, avg=500.42, stdev=15.81, samples=19 00:31:30.221 lat (msec) : 20=2.59%, 50=96.81%, 100=0.60% 00:31:30.221 cpu : usr=98.75%, sys=0.90%, ctx=18, majf=0, minf=9 00:31:30.221 IO depths : 1=4.2%, 2=8.4%, 4=19.8%, 8=58.6%, 16=8.9%, 32=0.0%, >=64=0.0% 00:31:30.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.221 complete : 0=0.0%, 4=92.9%, 8=1.9%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.221 issued rwts: total=5010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.221 filename0: (groupid=0, jobs=1): err= 0: pid=3761453: Mon Jul 15 11:42:57 2024 00:31:30.221 read: IOPS=494, BW=1977KiB/s (2025kB/s)(19.4MiB/10033msec) 00:31:30.221 slat (nsec): min=5411, max=85723, avg=16936.05, stdev=13231.67 00:31:30.221 clat (usec): min=16112, max=59286, avg=32210.64, stdev=4242.86 00:31:30.221 lat (usec): min=16133, max=59292, avg=32227.58, stdev=4243.70 00:31:30.221 clat percentiles (usec): 00:31:30.221 | 1.00th=[20055], 5.00th=[25297], 10.00th=[29754], 20.00th=[31065], 00:31:30.221 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:31:30.221 | 70.00th=[32637], 80.00th=[33162], 90.00th=[34866], 95.00th=[38536], 00:31:30.221 | 99.00th=[47449], 99.50th=[53740], 99.90th=[59507], 99.95th=[59507], 00:31:30.221 | 99.99th=[59507] 00:31:30.221 bw ( KiB/s): min= 1920, max= 2112, per=4.15%, avg=1977.60, stdev=60.18, samples=20 00:31:30.221 iops : min= 480, max= 528, avg=494.40, stdev=15.05, samples=20 00:31:30.221 lat (msec) : 20=1.07%, 50=98.04%, 100=0.89% 00:31:30.221 cpu : usr=98.68%, sys=0.97%, ctx=17, majf=0, minf=9 00:31:30.221 IO depths : 1=3.6%, 2=7.4%, 4=17.7%, 8=61.9%, 16=9.5%, 32=0.0%, >=64=0.0% 00:31:30.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.221 complete : 0=0.0%, 4=92.3%, 8=2.4%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.221 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.221 filename0: (groupid=0, jobs=1): err= 0: pid=3761454: Mon Jul 15 11:42:57 2024 00:31:30.221 
read: IOPS=496, BW=1986KiB/s (2033kB/s)(19.4MiB/10018msec) 00:31:30.221 slat (nsec): min=5422, max=99516, avg=15108.82, stdev=13631.16 00:31:30.221 clat (usec): min=14762, max=51849, avg=32110.64, stdev=2110.19 00:31:30.221 lat (usec): min=14771, max=51858, avg=32125.74, stdev=2109.24 00:31:30.221 clat percentiles (usec): 00:31:30.221 | 1.00th=[25297], 5.00th=[30540], 10.00th=[31065], 20.00th=[31327], 00:31:30.221 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:30.221 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:31:30.221 | 99.00th=[41157], 99.50th=[42730], 99.90th=[47973], 99.95th=[47973], 00:31:30.221 | 99.99th=[51643] 00:31:30.221 bw ( KiB/s): min= 1896, max= 2048, per=4.16%, avg=1982.80, stdev=67.10, samples=20 00:31:30.221 iops : min= 474, max= 512, avg=495.70, stdev=16.77, samples=20 00:31:30.221 lat (msec) : 20=0.30%, 50=99.66%, 100=0.04% 00:31:30.221 cpu : usr=98.97%, sys=0.69%, ctx=16, majf=0, minf=9 00:31:30.221 IO depths : 1=5.1%, 2=10.5%, 4=23.3%, 8=53.5%, 16=7.6%, 32=0.0%, >=64=0.0% 00:31:30.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.221 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.221 issued rwts: total=4973,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.221 filename0: (groupid=0, jobs=1): err= 0: pid=3761455: Mon Jul 15 11:42:57 2024 00:31:30.221 read: IOPS=486, BW=1944KiB/s (1991kB/s)(19.0MiB/10003msec) 00:31:30.221 slat (nsec): min=5414, max=88523, avg=17063.47, stdev=12894.78 00:31:30.221 clat (usec): min=11433, max=73762, avg=32807.79, stdev=4926.58 00:31:30.221 lat (usec): min=11439, max=73780, avg=32824.85, stdev=4925.94 00:31:30.221 clat percentiles (usec): 00:31:30.221 | 1.00th=[17433], 5.00th=[28967], 10.00th=[30802], 20.00th=[31327], 00:31:30.221 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:31:30.221 | 70.00th=[32637], 80.00th=[33162], 90.00th=[36963], 95.00th=[43254], 00:31:30.221 | 99.00th=[53216], 99.50th=[57410], 99.90th=[58983], 99.95th=[73925], 00:31:30.221 | 99.99th=[73925] 00:31:30.221 bw ( KiB/s): min= 1747, max= 2048, per=4.06%, avg=1932.79, stdev=93.77, samples=19 00:31:30.221 iops : min= 436, max= 512, avg=483.16, stdev=23.53, samples=19 00:31:30.221 lat (msec) : 20=1.69%, 50=96.89%, 100=1.42% 00:31:30.221 cpu : usr=98.98%, sys=0.70%, ctx=54, majf=0, minf=9 00:31:30.221 IO depths : 1=2.0%, 2=4.2%, 4=12.3%, 8=69.3%, 16=12.1%, 32=0.0%, >=64=0.0% 00:31:30.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.221 complete : 0=0.0%, 4=91.3%, 8=4.6%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.221 issued rwts: total=4862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.221 filename0: (groupid=0, jobs=1): err= 0: pid=3761456: Mon Jul 15 11:42:57 2024 00:31:30.221 read: IOPS=498, BW=1992KiB/s (2040kB/s)(19.5MiB/10023msec) 00:31:30.221 slat (nsec): min=5459, max=99566, avg=18077.06, stdev=14484.46 00:31:30.221 clat (usec): min=11596, max=58758, avg=31986.05, stdev=4179.26 00:31:30.221 lat (usec): min=11603, max=58764, avg=32004.13, stdev=4179.58 00:31:30.221 clat percentiles (usec): 00:31:30.221 | 1.00th=[19792], 5.00th=[25560], 10.00th=[29754], 20.00th=[31065], 00:31:30.221 | 30.00th=[31327], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:30.221 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33817], 95.00th=[38536], 
00:31:30.221 | 99.00th=[50070], 99.50th=[51643], 99.90th=[57934], 99.95th=[57934], 00:31:30.221 | 99.99th=[58983] 00:31:30.221 bw ( KiB/s): min= 1920, max= 2096, per=4.18%, avg=1990.40, stdev=57.90, samples=20 00:31:30.221 iops : min= 480, max= 524, avg=497.60, stdev=14.47, samples=20 00:31:30.221 lat (msec) : 20=1.00%, 50=98.00%, 100=1.00% 00:31:30.221 cpu : usr=98.99%, sys=0.66%, ctx=14, majf=0, minf=9 00:31:30.221 IO depths : 1=3.1%, 2=7.3%, 4=19.4%, 8=60.1%, 16=10.1%, 32=0.0%, >=64=0.0% 00:31:30.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.221 complete : 0=0.0%, 4=93.0%, 8=1.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.221 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.221 filename0: (groupid=0, jobs=1): err= 0: pid=3761457: Mon Jul 15 11:42:57 2024 00:31:30.221 read: IOPS=495, BW=1982KiB/s (2030kB/s)(19.4MiB/10006msec) 00:31:30.221 slat (usec): min=5, max=104, avg=17.91, stdev=15.64 00:31:30.221 clat (usec): min=10643, max=59839, avg=32169.67, stdev=5861.40 00:31:30.221 lat (usec): min=10649, max=59845, avg=32187.59, stdev=5861.38 00:31:30.221 clat percentiles (usec): 00:31:30.221 | 1.00th=[18220], 5.00th=[21890], 10.00th=[25035], 20.00th=[30016], 00:31:30.221 | 30.00th=[31327], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:31:30.221 | 70.00th=[32637], 80.00th=[33817], 90.00th=[39584], 95.00th=[42730], 00:31:30.221 | 99.00th=[50594], 99.50th=[53740], 99.90th=[60031], 99.95th=[60031], 00:31:30.221 | 99.99th=[60031] 00:31:30.221 bw ( KiB/s): min= 1832, max= 2192, per=4.15%, avg=1975.16, stdev=112.98, samples=19 00:31:30.221 iops : min= 458, max= 548, avg=493.79, stdev=28.24, samples=19 00:31:30.221 lat (msec) : 20=2.28%, 50=96.51%, 100=1.21% 00:31:30.221 cpu : usr=98.92%, sys=0.73%, ctx=20, majf=0, minf=9 00:31:30.221 IO depths : 1=1.3%, 2=3.1%, 4=11.3%, 8=71.1%, 16=13.2%, 32=0.0%, >=64=0.0% 00:31:30.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.221 complete : 0=0.0%, 4=91.1%, 8=4.9%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.221 issued rwts: total=4959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.221 filename1: (groupid=0, jobs=1): err= 0: pid=3761458: Mon Jul 15 11:42:57 2024 00:31:30.221 read: IOPS=498, BW=1994KiB/s (2042kB/s)(19.5MiB/10029msec) 00:31:30.221 slat (usec): min=5, max=111, avg=19.36, stdev=16.67 00:31:30.221 clat (usec): min=16293, max=56647, avg=31895.76, stdev=4545.93 00:31:30.221 lat (usec): min=16301, max=56656, avg=31915.12, stdev=4546.40 00:31:30.221 clat percentiles (usec): 00:31:30.221 | 1.00th=[17957], 5.00th=[22414], 10.00th=[29492], 20.00th=[31065], 00:31:30.221 | 30.00th=[31327], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:30.221 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[39584], 00:31:30.221 | 99.00th=[48497], 99.50th=[50070], 99.90th=[54789], 99.95th=[54789], 00:31:30.221 | 99.99th=[56886] 00:31:30.221 bw ( KiB/s): min= 1920, max= 2160, per=4.20%, avg=1998.00, stdev=62.88, samples=20 00:31:30.221 iops : min= 480, max= 540, avg=499.50, stdev=15.72, samples=20 00:31:30.222 lat (msec) : 20=2.70%, 50=96.78%, 100=0.52% 00:31:30.222 cpu : usr=97.71%, sys=1.26%, ctx=91, majf=0, minf=9 00:31:30.222 IO depths : 1=3.1%, 2=6.8%, 4=18.2%, 8=62.1%, 16=9.9%, 32=0.0%, >=64=0.0% 00:31:30.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.222 complete 
: 0=0.0%, 4=92.6%, 8=2.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.222 issued rwts: total=4999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.222 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.222 filename1: (groupid=0, jobs=1): err= 0: pid=3761459: Mon Jul 15 11:42:57 2024 00:31:30.222 read: IOPS=500, BW=2000KiB/s (2048kB/s)(19.6MiB/10023msec) 00:31:30.222 slat (nsec): min=5422, max=86106, avg=17012.67, stdev=13769.56 00:31:30.222 clat (usec): min=14339, max=56391, avg=31867.48, stdev=4601.87 00:31:30.222 lat (usec): min=14348, max=56401, avg=31884.49, stdev=4603.27 00:31:30.222 clat percentiles (usec): 00:31:30.222 | 1.00th=[18482], 5.00th=[23462], 10.00th=[26870], 20.00th=[30802], 00:31:30.222 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:31:30.222 | 70.00th=[32375], 80.00th=[32900], 90.00th=[34866], 95.00th=[40109], 00:31:30.222 | 99.00th=[47449], 99.50th=[50070], 99.90th=[53740], 99.95th=[56361], 00:31:30.222 | 99.99th=[56361] 00:31:30.222 bw ( KiB/s): min= 1792, max= 2272, per=4.20%, avg=1998.40, stdev=106.72, samples=20 00:31:30.222 iops : min= 448, max= 568, avg=499.60, stdev=26.68, samples=20 00:31:30.222 lat (msec) : 20=2.27%, 50=97.15%, 100=0.58% 00:31:30.222 cpu : usr=98.99%, sys=0.68%, ctx=19, majf=0, minf=9 00:31:30.222 IO depths : 1=3.0%, 2=6.6%, 4=16.8%, 8=63.1%, 16=10.5%, 32=0.0%, >=64=0.0% 00:31:30.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.222 complete : 0=0.0%, 4=92.2%, 8=2.9%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.222 issued rwts: total=5012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.222 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.222 filename1: (groupid=0, jobs=1): err= 0: pid=3761460: Mon Jul 15 11:42:57 2024 00:31:30.222 read: IOPS=522, BW=2089KiB/s (2139kB/s)(20.4MiB/10006msec) 00:31:30.222 slat (usec): min=5, max=105, avg=12.18, stdev=11.48 00:31:30.222 clat (usec): min=8267, max=57308, avg=30539.14, stdev=4842.08 00:31:30.222 lat (usec): min=8273, max=57333, avg=30551.32, stdev=4843.48 00:31:30.222 clat percentiles (usec): 00:31:30.222 | 1.00th=[13304], 5.00th=[19792], 10.00th=[23462], 20.00th=[29754], 00:31:30.222 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:31:30.222 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:31:30.222 | 99.00th=[40633], 99.50th=[45876], 99.90th=[57410], 99.95th=[57410], 00:31:30.222 | 99.99th=[57410] 00:31:30.222 bw ( KiB/s): min= 1920, max= 2480, per=4.39%, avg=2092.63, stdev=159.16, samples=19 00:31:30.222 iops : min= 480, max= 620, avg=523.16, stdev=39.79, samples=19 00:31:30.222 lat (msec) : 10=0.48%, 20=4.75%, 50=94.36%, 100=0.42% 00:31:30.222 cpu : usr=98.90%, sys=0.74%, ctx=23, majf=0, minf=9 00:31:30.222 IO depths : 1=4.6%, 2=9.3%, 4=20.4%, 8=57.4%, 16=8.3%, 32=0.0%, >=64=0.0% 00:31:30.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.222 complete : 0=0.0%, 4=93.1%, 8=1.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.222 issued rwts: total=5226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.222 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.222 filename1: (groupid=0, jobs=1): err= 0: pid=3761461: Mon Jul 15 11:42:57 2024 00:31:30.222 read: IOPS=496, BW=1985KiB/s (2033kB/s)(19.4MiB/10009msec) 00:31:30.222 slat (usec): min=5, max=136, avg=21.38, stdev=16.73 00:31:30.222 clat (usec): min=12919, max=57163, avg=32059.93, stdev=4430.06 00:31:30.222 lat (usec): min=12927, max=57180, avg=32081.30, 
stdev=4430.38 00:31:30.222 clat percentiles (usec): 00:31:30.222 | 1.00th=[17695], 5.00th=[24511], 10.00th=[30016], 20.00th=[31065], 00:31:30.222 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:31:30.222 | 70.00th=[32637], 80.00th=[32900], 90.00th=[34341], 95.00th=[39060], 00:31:30.222 | 99.00th=[47973], 99.50th=[52691], 99.90th=[56886], 99.95th=[56886], 00:31:30.222 | 99.99th=[57410] 00:31:30.222 bw ( KiB/s): min= 1795, max= 2160, per=4.17%, avg=1984.95, stdev=98.42, samples=20 00:31:30.222 iops : min= 448, max= 540, avg=496.20, stdev=24.68, samples=20 00:31:30.222 lat (msec) : 20=2.17%, 50=97.04%, 100=0.79% 00:31:30.222 cpu : usr=97.56%, sys=1.38%, ctx=50, majf=0, minf=9 00:31:30.222 IO depths : 1=3.1%, 2=6.8%, 4=17.0%, 8=62.2%, 16=10.8%, 32=0.0%, >=64=0.0% 00:31:30.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.222 complete : 0=0.0%, 4=92.5%, 8=3.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.222 issued rwts: total=4968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.222 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.222 filename1: (groupid=0, jobs=1): err= 0: pid=3761462: Mon Jul 15 11:42:57 2024 00:31:30.222 read: IOPS=480, BW=1922KiB/s (1969kB/s)(18.8MiB/10004msec) 00:31:30.222 slat (usec): min=5, max=138, avg=18.62, stdev=16.85 00:31:30.222 clat (usec): min=3880, max=76646, avg=33172.81, stdev=5881.01 00:31:30.222 lat (usec): min=3888, max=76669, avg=33191.43, stdev=5880.37 00:31:30.222 clat percentiles (usec): 00:31:30.222 | 1.00th=[17695], 5.00th=[25560], 10.00th=[30016], 20.00th=[31327], 00:31:30.222 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32637], 00:31:30.222 | 70.00th=[32900], 80.00th=[33817], 90.00th=[40109], 95.00th=[45351], 00:31:30.222 | 99.00th=[50070], 99.50th=[52167], 99.90th=[77071], 99.95th=[77071], 00:31:30.222 | 99.99th=[77071] 00:31:30.222 bw ( KiB/s): min= 1667, max= 2048, per=4.02%, avg=1914.26, stdev=87.36, samples=19 00:31:30.222 iops : min= 416, max= 512, avg=478.53, stdev=21.96, samples=19 00:31:30.222 lat (msec) : 4=0.21%, 20=1.98%, 50=96.80%, 100=1.02% 00:31:30.222 cpu : usr=99.06%, sys=0.59%, ctx=33, majf=0, minf=9 00:31:30.222 IO depths : 1=1.8%, 2=3.5%, 4=11.2%, 8=70.3%, 16=13.2%, 32=0.0%, >=64=0.0% 00:31:30.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.222 complete : 0=0.0%, 4=91.1%, 8=5.6%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.222 issued rwts: total=4808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.222 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.222 filename1: (groupid=0, jobs=1): err= 0: pid=3761463: Mon Jul 15 11:42:57 2024 00:31:30.222 read: IOPS=488, BW=1956KiB/s (2003kB/s)(19.1MiB/10003msec) 00:31:30.222 slat (usec): min=5, max=123, avg=24.41, stdev=17.04 00:31:30.222 clat (usec): min=12675, max=79881, avg=32493.53, stdev=3511.03 00:31:30.222 lat (usec): min=12681, max=79899, avg=32517.94, stdev=3511.42 00:31:30.222 clat percentiles (usec): 00:31:30.222 | 1.00th=[28705], 5.00th=[30540], 10.00th=[31065], 20.00th=[31327], 00:31:30.222 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:30.222 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33817], 95.00th=[37487], 00:31:30.222 | 99.00th=[45876], 99.50th=[46924], 99.90th=[79168], 99.95th=[79168], 00:31:30.222 | 99.99th=[80217] 00:31:30.222 bw ( KiB/s): min= 1498, max= 2048, per=4.08%, avg=1944.95, stdev=132.45, samples=19 00:31:30.222 iops : min= 374, max= 512, avg=486.21, stdev=33.21, samples=19 
00:31:30.222 lat (msec) : 20=0.45%, 50=99.22%, 100=0.33% 00:31:30.222 cpu : usr=98.58%, sys=0.75%, ctx=25, majf=0, minf=9 00:31:30.222 IO depths : 1=5.4%, 2=10.8%, 4=23.7%, 8=52.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:31:30.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.222 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.222 issued rwts: total=4891,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.222 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.222 filename1: (groupid=0, jobs=1): err= 0: pid=3761464: Mon Jul 15 11:42:57 2024 00:31:30.222 read: IOPS=505, BW=2023KiB/s (2071kB/s)(19.8MiB/10023msec) 00:31:30.222 slat (nsec): min=5427, max=99155, avg=16533.26, stdev=13064.38 00:31:30.222 clat (usec): min=12646, max=48750, avg=31512.02, stdev=3418.68 00:31:30.222 lat (usec): min=12652, max=48758, avg=31528.56, stdev=3419.70 00:31:30.222 clat percentiles (usec): 00:31:30.222 | 1.00th=[18482], 5.00th=[23987], 10.00th=[29754], 20.00th=[31065], 00:31:30.222 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:30.222 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:31:30.222 | 99.00th=[44303], 99.50th=[44827], 99.90th=[47449], 99.95th=[48497], 00:31:30.222 | 99.99th=[48497] 00:31:30.222 bw ( KiB/s): min= 1920, max= 2448, per=4.24%, avg=2020.80, stdev=121.39, samples=20 00:31:30.222 iops : min= 480, max= 612, avg=505.20, stdev=30.35, samples=20 00:31:30.222 lat (msec) : 20=2.35%, 50=97.65% 00:31:30.222 cpu : usr=98.69%, sys=0.95%, ctx=23, majf=0, minf=9 00:31:30.222 IO depths : 1=4.9%, 2=9.9%, 4=22.3%, 8=55.0%, 16=7.9%, 32=0.0%, >=64=0.0% 00:31:30.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.222 complete : 0=0.0%, 4=93.5%, 8=0.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.222 issued rwts: total=5068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.222 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.222 filename1: (groupid=0, jobs=1): err= 0: pid=3761465: Mon Jul 15 11:42:57 2024 00:31:30.222 read: IOPS=493, BW=1974KiB/s (2021kB/s)(19.3MiB/10002msec) 00:31:30.222 slat (nsec): min=5430, max=92629, avg=19207.37, stdev=15081.17 00:31:30.222 clat (usec): min=15487, max=52221, avg=32250.71, stdev=3761.88 00:31:30.222 lat (usec): min=15493, max=52233, avg=32269.91, stdev=3761.54 00:31:30.222 clat percentiles (usec): 00:31:30.222 | 1.00th=[19268], 5.00th=[29492], 10.00th=[30802], 20.00th=[31065], 00:31:30.222 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:30.222 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33817], 95.00th=[38011], 00:31:30.222 | 99.00th=[47973], 99.50th=[50070], 99.90th=[52167], 99.95th=[52167], 00:31:30.222 | 99.99th=[52167] 00:31:30.222 bw ( KiB/s): min= 1792, max= 2096, per=4.12%, avg=1963.79, stdev=80.68, samples=19 00:31:30.222 iops : min= 448, max= 524, avg=490.95, stdev=20.17, samples=19 00:31:30.222 lat (msec) : 20=1.42%, 50=98.18%, 100=0.41% 00:31:30.222 cpu : usr=99.04%, sys=0.63%, ctx=15, majf=0, minf=9 00:31:30.222 IO depths : 1=4.2%, 2=9.0%, 4=21.0%, 8=56.9%, 16=8.9%, 32=0.0%, >=64=0.0% 00:31:30.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.222 complete : 0=0.0%, 4=93.3%, 8=1.4%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.222 issued rwts: total=4936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.222 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.222 filename2: (groupid=0, jobs=1): err= 0: pid=3761466: 
Mon Jul 15 11:42:57 2024 00:31:30.222 read: IOPS=509, BW=2037KiB/s (2086kB/s)(19.9MiB/10017msec) 00:31:30.222 slat (usec): min=5, max=121, avg=12.89, stdev=11.72 00:31:30.222 clat (usec): min=10533, max=58338, avg=31317.28, stdev=3868.65 00:31:30.222 lat (usec): min=10539, max=58346, avg=31330.18, stdev=3869.40 00:31:30.222 clat percentiles (usec): 00:31:30.222 | 1.00th=[17957], 5.00th=[22414], 10.00th=[27395], 20.00th=[31065], 00:31:30.222 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:30.222 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:31:30.222 | 99.00th=[42206], 99.50th=[43779], 99.90th=[58459], 99.95th=[58459], 00:31:30.222 | 99.99th=[58459] 00:31:30.222 bw ( KiB/s): min= 1920, max= 2392, per=4.27%, avg=2034.00, stdev=119.17, samples=20 00:31:30.222 iops : min= 480, max= 598, avg=508.50, stdev=29.79, samples=20 00:31:30.222 lat (msec) : 20=2.25%, 50=97.55%, 100=0.20% 00:31:30.223 cpu : usr=98.62%, sys=1.02%, ctx=16, majf=0, minf=9 00:31:30.223 IO depths : 1=5.2%, 2=10.4%, 4=21.8%, 8=55.0%, 16=7.6%, 32=0.0%, >=64=0.0% 00:31:30.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.223 complete : 0=0.0%, 4=93.3%, 8=1.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.223 issued rwts: total=5101,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.223 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.223 filename2: (groupid=0, jobs=1): err= 0: pid=3761467: Mon Jul 15 11:42:57 2024 00:31:30.223 read: IOPS=501, BW=2005KiB/s (2053kB/s)(19.6MiB/10018msec) 00:31:30.223 slat (nsec): min=5467, max=93522, avg=16446.06, stdev=13382.23 00:31:30.223 clat (usec): min=15965, max=49689, avg=31787.41, stdev=2571.12 00:31:30.223 lat (usec): min=15975, max=49698, avg=31803.85, stdev=2571.16 00:31:30.223 clat percentiles (usec): 00:31:30.223 | 1.00th=[20579], 5.00th=[28443], 10.00th=[30802], 20.00th=[31327], 00:31:30.223 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:30.223 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:31:30.223 | 99.00th=[39584], 99.50th=[41681], 99.90th=[49021], 99.95th=[49546], 00:31:30.223 | 99.99th=[49546] 00:31:30.223 bw ( KiB/s): min= 1896, max= 2128, per=4.20%, avg=2002.00, stdev=72.76, samples=20 00:31:30.223 iops : min= 474, max= 532, avg=500.50, stdev=18.19, samples=20 00:31:30.223 lat (msec) : 20=0.64%, 50=99.36% 00:31:30.223 cpu : usr=98.98%, sys=0.67%, ctx=18, majf=0, minf=9 00:31:30.223 IO depths : 1=5.2%, 2=10.7%, 4=23.0%, 8=53.7%, 16=7.5%, 32=0.0%, >=64=0.0% 00:31:30.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.223 complete : 0=0.0%, 4=93.6%, 8=0.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.223 issued rwts: total=5021,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.223 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.223 filename2: (groupid=0, jobs=1): err= 0: pid=3761468: Mon Jul 15 11:42:57 2024 00:31:30.223 read: IOPS=487, BW=1950KiB/s (1997kB/s)(19.1MiB/10008msec) 00:31:30.223 slat (nsec): min=5407, max=93180, avg=18697.52, stdev=14389.46 00:31:30.223 clat (usec): min=6858, max=63305, avg=32663.69, stdev=4665.45 00:31:30.223 lat (usec): min=6868, max=63326, avg=32682.39, stdev=4665.13 00:31:30.223 clat percentiles (usec): 00:31:30.223 | 1.00th=[17433], 5.00th=[28443], 10.00th=[30802], 20.00th=[31327], 00:31:30.223 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:31:30.223 | 70.00th=[32637], 80.00th=[33162], 90.00th=[35914], 
95.00th=[42730], 00:31:30.223 | 99.00th=[50070], 99.50th=[53740], 99.90th=[59507], 99.95th=[59507], 00:31:30.223 | 99.99th=[63177] 00:31:30.223 bw ( KiB/s): min= 1840, max= 2048, per=4.09%, avg=1946.95, stdev=65.77, samples=19 00:31:30.223 iops : min= 460, max= 512, avg=486.74, stdev=16.44, samples=19 00:31:30.223 lat (msec) : 10=0.12%, 20=1.70%, 50=97.05%, 100=1.13% 00:31:30.223 cpu : usr=98.99%, sys=0.65%, ctx=17, majf=0, minf=9 00:31:30.223 IO depths : 1=3.1%, 2=6.5%, 4=16.5%, 8=63.1%, 16=10.7%, 32=0.0%, >=64=0.0% 00:31:30.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.223 complete : 0=0.0%, 4=92.5%, 8=2.8%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.223 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.223 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.223 filename2: (groupid=0, jobs=1): err= 0: pid=3761469: Mon Jul 15 11:42:57 2024 00:31:30.223 read: IOPS=480, BW=1922KiB/s (1968kB/s)(18.8MiB/10011msec) 00:31:30.223 slat (usec): min=5, max=102, avg=18.50, stdev=15.49 00:31:30.223 clat (usec): min=12707, max=58208, avg=33158.73, stdev=5466.46 00:31:30.223 lat (usec): min=12716, max=58227, avg=33177.23, stdev=5465.83 00:31:30.223 clat percentiles (usec): 00:31:30.223 | 1.00th=[18220], 5.00th=[24773], 10.00th=[29754], 20.00th=[31065], 00:31:30.223 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:31:30.223 | 70.00th=[32900], 80.00th=[34341], 90.00th=[41157], 95.00th=[44827], 00:31:30.223 | 99.00th=[50070], 99.50th=[51643], 99.90th=[54789], 99.95th=[57934], 00:31:30.223 | 99.99th=[58459] 00:31:30.223 bw ( KiB/s): min= 1632, max= 2048, per=4.03%, avg=1919.60, stdev=99.63, samples=20 00:31:30.223 iops : min= 408, max= 512, avg=479.90, stdev=24.91, samples=20 00:31:30.223 lat (msec) : 20=1.75%, 50=97.38%, 100=0.87% 00:31:30.223 cpu : usr=99.02%, sys=0.65%, ctx=15, majf=0, minf=9 00:31:30.223 IO depths : 1=1.7%, 2=4.0%, 4=13.8%, 8=67.8%, 16=12.8%, 32=0.0%, >=64=0.0% 00:31:30.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.223 complete : 0=0.0%, 4=92.0%, 8=3.8%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.223 issued rwts: total=4811,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.223 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.223 filename2: (groupid=0, jobs=1): err= 0: pid=3761470: Mon Jul 15 11:42:57 2024 00:31:30.223 read: IOPS=507, BW=2031KiB/s (2080kB/s)(19.9MiB/10018msec) 00:31:30.223 slat (nsec): min=5410, max=97805, avg=15760.12, stdev=13603.42 00:31:30.223 clat (usec): min=12397, max=53930, avg=31386.38, stdev=4064.55 00:31:30.223 lat (usec): min=12404, max=53944, avg=31402.14, stdev=4065.51 00:31:30.223 clat percentiles (usec): 00:31:30.223 | 1.00th=[17957], 5.00th=[22152], 10.00th=[26608], 20.00th=[31065], 00:31:30.223 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:31:30.223 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33424], 95.00th=[35390], 00:31:30.223 | 99.00th=[43779], 99.50th=[48497], 99.90th=[53740], 99.95th=[53740], 00:31:30.223 | 99.99th=[53740] 00:31:30.223 bw ( KiB/s): min= 1920, max= 2240, per=4.26%, avg=2028.40, stdev=97.41, samples=20 00:31:30.223 iops : min= 480, max= 560, avg=507.10, stdev=24.35, samples=20 00:31:30.223 lat (msec) : 20=2.61%, 50=96.95%, 100=0.43% 00:31:30.223 cpu : usr=98.70%, sys=0.94%, ctx=19, majf=0, minf=9 00:31:30.223 IO depths : 1=4.0%, 2=8.4%, 4=19.6%, 8=58.8%, 16=9.2%, 32=0.0%, >=64=0.0% 00:31:30.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:31:30.223 complete : 0=0.0%, 4=92.9%, 8=1.9%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.223 issued rwts: total=5087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.223 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.223 filename2: (groupid=0, jobs=1): err= 0: pid=3761471: Mon Jul 15 11:42:57 2024 00:31:30.223 read: IOPS=481, BW=1926KiB/s (1973kB/s)(18.8MiB/10004msec) 00:31:30.223 slat (nsec): min=5404, max=88888, avg=15997.24, stdev=14292.16 00:31:30.223 clat (usec): min=6852, max=63384, avg=33130.91, stdev=5618.97 00:31:30.223 lat (usec): min=6858, max=63421, avg=33146.90, stdev=5618.94 00:31:30.223 clat percentiles (usec): 00:31:30.223 | 1.00th=[18220], 5.00th=[25035], 10.00th=[29754], 20.00th=[31065], 00:31:30.223 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:31:30.223 | 70.00th=[32900], 80.00th=[34341], 90.00th=[40109], 95.00th=[45876], 00:31:30.223 | 99.00th=[51643], 99.50th=[52167], 99.90th=[54264], 99.95th=[63177], 00:31:30.223 | 99.99th=[63177] 00:31:30.223 bw ( KiB/s): min= 1792, max= 2048, per=4.03%, avg=1920.00, stdev=69.64, samples=19 00:31:30.223 iops : min= 448, max= 512, avg=480.00, stdev=17.41, samples=19 00:31:30.223 lat (msec) : 10=0.12%, 20=1.41%, 50=96.58%, 100=1.89% 00:31:30.223 cpu : usr=98.98%, sys=0.68%, ctx=18, majf=0, minf=9 00:31:30.223 IO depths : 1=1.0%, 2=2.0%, 4=8.5%, 8=74.1%, 16=14.4%, 32=0.0%, >=64=0.0% 00:31:30.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.223 complete : 0=0.0%, 4=90.5%, 8=6.6%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.223 issued rwts: total=4818,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.223 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.223 filename2: (groupid=0, jobs=1): err= 0: pid=3761472: Mon Jul 15 11:42:57 2024 00:31:30.223 read: IOPS=489, BW=1959KiB/s (2006kB/s)(19.1MiB/10002msec) 00:31:30.223 slat (nsec): min=5409, max=85168, avg=15982.60, stdev=13129.04 00:31:30.223 clat (usec): min=15472, max=65690, avg=32548.25, stdev=4447.42 00:31:30.223 lat (usec): min=15499, max=65708, avg=32564.23, stdev=4447.40 00:31:30.223 clat percentiles (usec): 00:31:30.223 | 1.00th=[17957], 5.00th=[28443], 10.00th=[30802], 20.00th=[31327], 00:31:30.223 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:31:30.223 | 70.00th=[32637], 80.00th=[33162], 90.00th=[34341], 95.00th=[41681], 00:31:30.223 | 99.00th=[50594], 99.50th=[52691], 99.90th=[58983], 99.95th=[65799], 00:31:30.223 | 99.99th=[65799] 00:31:30.223 bw ( KiB/s): min= 1792, max= 2048, per=4.10%, avg=1952.00, stdev=70.70, samples=19 00:31:30.223 iops : min= 448, max= 512, avg=488.00, stdev=17.68, samples=19 00:31:30.223 lat (msec) : 20=1.47%, 50=97.51%, 100=1.02% 00:31:30.223 cpu : usr=99.06%, sys=0.61%, ctx=16, majf=0, minf=9 00:31:30.223 IO depths : 1=3.3%, 2=7.8%, 4=20.4%, 8=58.6%, 16=9.9%, 32=0.0%, >=64=0.0% 00:31:30.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.223 complete : 0=0.0%, 4=93.2%, 8=1.7%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.223 issued rwts: total=4898,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.223 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.223 filename2: (groupid=0, jobs=1): err= 0: pid=3761473: Mon Jul 15 11:42:57 2024 00:31:30.223 read: IOPS=499, BW=1999KiB/s (2047kB/s)(19.6MiB/10022msec) 00:31:30.223 slat (usec): min=5, max=100, avg=15.42, stdev=13.74 00:31:30.223 clat (usec): min=15593, max=57360, avg=31913.99, stdev=4349.16 00:31:30.223 lat 
(usec): min=15627, max=57370, avg=31929.41, stdev=4350.32 00:31:30.223 clat percentiles (usec): 00:31:30.223 | 1.00th=[19530], 5.00th=[23725], 10.00th=[28705], 20.00th=[31065], 00:31:30.223 | 30.00th=[31327], 40.00th=[31589], 50.00th=[32113], 60.00th=[32113], 00:31:30.223 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[38011], 00:31:30.223 | 99.00th=[49546], 99.50th=[53740], 99.90th=[57410], 99.95th=[57410], 00:31:30.223 | 99.99th=[57410] 00:31:30.223 bw ( KiB/s): min= 1856, max= 2096, per=4.19%, avg=1996.80, stdev=68.98, samples=20 00:31:30.223 iops : min= 464, max= 524, avg=499.20, stdev=17.25, samples=20 00:31:30.223 lat (msec) : 20=1.46%, 50=97.58%, 100=0.96% 00:31:30.223 cpu : usr=98.86%, sys=0.81%, ctx=18, majf=0, minf=9 00:31:30.223 IO depths : 1=2.7%, 2=6.1%, 4=15.9%, 8=64.5%, 16=10.8%, 32=0.0%, >=64=0.0% 00:31:30.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.223 complete : 0=0.0%, 4=91.9%, 8=3.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.223 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.223 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.223 00:31:30.223 Run status group 0 (all jobs): 00:31:30.223 READ: bw=46.5MiB/s (48.8MB/s), 1922KiB/s-2095KiB/s (1968kB/s-2145kB/s), io=467MiB (489MB), run=10002-10033msec 00:31:30.223 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:30.223 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:30.223 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:30.223 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:30.223 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:30.223 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:30.223 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.223 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.223 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.223 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:30.223 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.223 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.224 bdev_null0 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.224 [2024-07-15 11:42:57.489812] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.224 bdev_null1 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 
-- # local subsystem config 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:30.224 { 00:31:30.224 "params": { 00:31:30.224 "name": "Nvme$subsystem", 00:31:30.224 "trtype": "$TEST_TRANSPORT", 00:31:30.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:30.224 "adrfam": "ipv4", 00:31:30.224 "trsvcid": "$NVMF_PORT", 00:31:30.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:30.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:30.224 "hdgst": ${hdgst:-false}, 00:31:30.224 "ddgst": ${ddgst:-false} 00:31:30.224 }, 00:31:30.224 "method": "bdev_nvme_attach_controller" 00:31:30.224 } 00:31:30.224 EOF 00:31:30.224 )") 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:30.224 { 00:31:30.224 "params": { 00:31:30.224 "name": "Nvme$subsystem", 00:31:30.224 "trtype": "$TEST_TRANSPORT", 00:31:30.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:30.224 "adrfam": "ipv4", 00:31:30.224 "trsvcid": "$NVMF_PORT", 00:31:30.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:30.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:30.224 "hdgst": ${hdgst:-false}, 00:31:30.224 "ddgst": ${ddgst:-false} 00:31:30.224 }, 00:31:30.224 "method": 
"bdev_nvme_attach_controller" 00:31:30.224 } 00:31:30.224 EOF 00:31:30.224 )") 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:30.224 11:42:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:30.224 "params": { 00:31:30.224 "name": "Nvme0", 00:31:30.224 "trtype": "tcp", 00:31:30.224 "traddr": "10.0.0.2", 00:31:30.224 "adrfam": "ipv4", 00:31:30.224 "trsvcid": "4420", 00:31:30.224 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:30.224 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:30.224 "hdgst": false, 00:31:30.224 "ddgst": false 00:31:30.224 }, 00:31:30.224 "method": "bdev_nvme_attach_controller" 00:31:30.225 },{ 00:31:30.225 "params": { 00:31:30.225 "name": "Nvme1", 00:31:30.225 "trtype": "tcp", 00:31:30.225 "traddr": "10.0.0.2", 00:31:30.225 "adrfam": "ipv4", 00:31:30.225 "trsvcid": "4420", 00:31:30.225 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:30.225 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:30.225 "hdgst": false, 00:31:30.225 "ddgst": false 00:31:30.225 }, 00:31:30.225 "method": "bdev_nvme_attach_controller" 00:31:30.225 }' 00:31:30.225 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:30.225 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:30.225 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.225 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.225 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:30.225 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:30.225 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:30.225 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:30.225 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:30.225 11:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.225 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:30.225 ... 00:31:30.225 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:30.225 ... 
00:31:30.225 fio-3.35 00:31:30.225 Starting 4 threads 00:31:30.225 EAL: No free 2048 kB hugepages reported on node 1 00:31:35.556 00:31:35.556 filename0: (groupid=0, jobs=1): err= 0: pid=3763741: Mon Jul 15 11:43:03 2024 00:31:35.556 read: IOPS=2108, BW=16.5MiB/s (17.3MB/s)(82.4MiB/5003msec) 00:31:35.556 slat (nsec): min=5390, max=54891, avg=6000.45, stdev=1785.57 00:31:35.556 clat (usec): min=1896, max=44174, avg=3777.08, stdev=1262.43 00:31:35.556 lat (usec): min=1902, max=44203, avg=3783.09, stdev=1262.56 00:31:35.556 clat percentiles (usec): 00:31:35.556 | 1.00th=[ 2474], 5.00th=[ 2868], 10.00th=[ 3064], 20.00th=[ 3326], 00:31:35.556 | 30.00th=[ 3458], 40.00th=[ 3589], 50.00th=[ 3720], 60.00th=[ 3785], 00:31:35.556 | 70.00th=[ 3851], 80.00th=[ 4146], 90.00th=[ 4555], 95.00th=[ 4883], 00:31:35.556 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 6128], 99.95th=[44303], 00:31:35.556 | 99.99th=[44303] 00:31:35.556 bw ( KiB/s): min=15792, max=17296, per=25.37%, avg=16894.22, stdev=456.73, samples=9 00:31:35.556 iops : min= 1974, max= 2162, avg=2111.78, stdev=57.09, samples=9 00:31:35.556 lat (msec) : 2=0.06%, 4=74.85%, 10=25.02%, 50=0.08% 00:31:35.556 cpu : usr=96.32%, sys=3.42%, ctx=13, majf=0, minf=94 00:31:35.556 IO depths : 1=0.4%, 2=1.6%, 4=69.7%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:35.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.556 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.556 issued rwts: total=10549,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.556 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:35.556 filename0: (groupid=0, jobs=1): err= 0: pid=3763742: Mon Jul 15 11:43:03 2024 00:31:35.556 read: IOPS=2118, BW=16.6MiB/s (17.4MB/s)(82.8MiB/5002msec) 00:31:35.556 slat (nsec): min=5382, max=33655, avg=7250.33, stdev=2577.09 00:31:35.556 clat (usec): min=1303, max=7347, avg=3756.00, stdev=657.70 00:31:35.556 lat (usec): min=1311, max=7355, avg=3763.25, stdev=657.61 00:31:35.556 clat percentiles (usec): 00:31:35.556 | 1.00th=[ 1565], 5.00th=[ 2802], 10.00th=[ 3097], 20.00th=[ 3359], 00:31:35.556 | 30.00th=[ 3490], 40.00th=[ 3589], 50.00th=[ 3752], 60.00th=[ 3785], 00:31:35.556 | 70.00th=[ 3916], 80.00th=[ 4228], 90.00th=[ 4621], 95.00th=[ 4883], 00:31:35.556 | 99.00th=[ 5538], 99.50th=[ 5735], 99.90th=[ 6259], 99.95th=[ 6259], 00:31:35.556 | 99.99th=[ 7308] 00:31:35.556 bw ( KiB/s): min=16208, max=18512, per=25.42%, avg=16933.33, stdev=650.42, samples=9 00:31:35.556 iops : min= 2026, max= 2314, avg=2116.67, stdev=81.30, samples=9 00:31:35.556 lat (msec) : 2=1.76%, 4=70.71%, 10=27.52% 00:31:35.556 cpu : usr=96.64%, sys=3.12%, ctx=9, majf=0, minf=41 00:31:35.556 IO depths : 1=0.4%, 2=2.0%, 4=68.8%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:35.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.556 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.556 issued rwts: total=10598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.556 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:35.556 filename1: (groupid=0, jobs=1): err= 0: pid=3763743: Mon Jul 15 11:43:03 2024 00:31:35.556 read: IOPS=2072, BW=16.2MiB/s (17.0MB/s)(81.0MiB/5002msec) 00:31:35.556 slat (nsec): min=5388, max=34590, avg=7439.99, stdev=2472.14 00:31:35.556 clat (usec): min=1922, max=44976, avg=3839.40, stdev=1295.81 00:31:35.556 lat (usec): min=1930, max=45003, avg=3846.84, stdev=1295.94 00:31:35.556 clat percentiles (usec): 00:31:35.556 | 1.00th=[ 2573], 
5.00th=[ 2933], 10.00th=[ 3130], 20.00th=[ 3359], 00:31:35.556 | 30.00th=[ 3490], 40.00th=[ 3621], 50.00th=[ 3752], 60.00th=[ 3785], 00:31:35.556 | 70.00th=[ 3982], 80.00th=[ 4293], 90.00th=[ 4686], 95.00th=[ 4948], 00:31:35.556 | 99.00th=[ 5669], 99.50th=[ 5800], 99.90th=[ 6390], 99.95th=[44827], 00:31:35.556 | 99.99th=[44827] 00:31:35.556 bw ( KiB/s): min=15310, max=16992, per=24.79%, avg=16513.56, stdev=507.41, samples=9 00:31:35.556 iops : min= 1913, max= 2124, avg=2064.11, stdev=63.65, samples=9 00:31:35.556 lat (msec) : 2=0.05%, 4=71.01%, 10=28.86%, 50=0.08% 00:31:35.556 cpu : usr=97.02%, sys=2.72%, ctx=8, majf=0, minf=25 00:31:35.556 IO depths : 1=0.4%, 2=1.6%, 4=70.1%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:35.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.556 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.556 issued rwts: total=10366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.556 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:35.556 filename1: (groupid=0, jobs=1): err= 0: pid=3763744: Mon Jul 15 11:43:03 2024 00:31:35.556 read: IOPS=2027, BW=15.8MiB/s (16.6MB/s)(79.2MiB/5001msec) 00:31:35.556 slat (nsec): min=5390, max=28428, avg=6047.39, stdev=1847.69 00:31:35.556 clat (usec): min=1496, max=44158, avg=3929.41, stdev=1302.27 00:31:35.556 lat (usec): min=1502, max=44185, avg=3935.46, stdev=1302.41 00:31:35.556 clat percentiles (usec): 00:31:35.556 | 1.00th=[ 2606], 5.00th=[ 2966], 10.00th=[ 3195], 20.00th=[ 3425], 00:31:35.556 | 30.00th=[ 3556], 40.00th=[ 3687], 50.00th=[ 3785], 60.00th=[ 3851], 00:31:35.556 | 70.00th=[ 4113], 80.00th=[ 4424], 90.00th=[ 4817], 95.00th=[ 5145], 00:31:35.556 | 99.00th=[ 5735], 99.50th=[ 5932], 99.90th=[ 6521], 99.95th=[44303], 00:31:35.556 | 99.99th=[44303] 00:31:35.556 bw ( KiB/s): min=14845, max=16496, per=24.33%, avg=16205.89, stdev=515.95, samples=9 00:31:35.556 iops : min= 1855, max= 2062, avg=2025.67, stdev=64.70, samples=9 00:31:35.556 lat (msec) : 2=0.05%, 4=65.27%, 10=34.60%, 50=0.08% 00:31:35.556 cpu : usr=96.84%, sys=2.90%, ctx=9, majf=0, minf=43 00:31:35.556 IO depths : 1=0.3%, 2=1.8%, 4=69.1%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:35.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.556 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.556 issued rwts: total=10139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.556 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:35.556 00:31:35.556 Run status group 0 (all jobs): 00:31:35.556 READ: bw=65.0MiB/s (68.2MB/s), 15.8MiB/s-16.6MiB/s (16.6MB/s-17.4MB/s), io=325MiB (341MB), run=5001-5003msec 00:31:35.556 11:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:35.556 11:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:35.556 11:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:35.556 11:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:35.556 11:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:35.556 11:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:35.556 11:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.556 11:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:35.556 11:43:03 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.556 11:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:35.556 11:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.556 11:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:35.556 11:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.556 11:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:35.556 11:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:35.556 11:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:35.556 11:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:35.556 11:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.556 11:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:35.556 11:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.556 11:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:35.556 11:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.556 11:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:35.556 11:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.556 00:31:35.556 real 0m24.534s 00:31:35.556 user 5m17.573s 00:31:35.556 sys 0m4.052s 00:31:35.556 11:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:35.556 11:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:35.556 ************************************ 00:31:35.556 END TEST fio_dif_rand_params 00:31:35.556 ************************************ 00:31:35.556 11:43:03 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:35.556 11:43:03 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:35.556 11:43:03 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:35.556 11:43:03 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:35.556 11:43:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:35.556 ************************************ 00:31:35.556 START TEST fio_dif_digest 00:31:35.556 ************************************ 00:31:35.556 11:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:31:35.556 11:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:35.556 11:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:35.556 11:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:35.556 11:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:35.556 11:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:35.556 11:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:35.556 11:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:35.556 11:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:35.556 11:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:35.556 11:43:03 nvmf_dif.fio_dif_digest 
-- target/dif.sh@128 -- # ddgst=true 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:35.557 bdev_null0 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:35.557 [2024-07-15 11:43:03.977946] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:35.557 { 00:31:35.557 "params": { 00:31:35.557 "name": "Nvme$subsystem", 00:31:35.557 "trtype": "$TEST_TRANSPORT", 00:31:35.557 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:31:35.557 "adrfam": "ipv4", 00:31:35.557 "trsvcid": "$NVMF_PORT", 00:31:35.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:35.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:35.557 "hdgst": ${hdgst:-false}, 00:31:35.557 "ddgst": ${ddgst:-false} 00:31:35.557 }, 00:31:35.557 "method": "bdev_nvme_attach_controller" 00:31:35.557 } 00:31:35.557 EOF 00:31:35.557 )") 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:31:35.557 11:43:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:35.557 "params": { 00:31:35.557 "name": "Nvme0", 00:31:35.557 "trtype": "tcp", 00:31:35.557 "traddr": "10.0.0.2", 00:31:35.557 "adrfam": "ipv4", 00:31:35.557 "trsvcid": "4420", 00:31:35.557 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:35.557 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:35.557 "hdgst": true, 00:31:35.557 "ddgst": true 00:31:35.557 }, 00:31:35.557 "method": "bdev_nvme_attach_controller" 00:31:35.557 }' 00:31:35.557 11:43:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:35.557 11:43:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:35.557 11:43:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:35.557 11:43:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:35.557 11:43:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:35.557 11:43:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:35.557 11:43:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:35.557 11:43:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:35.557 11:43:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:35.557 11:43:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:35.818 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:35.818 ... 
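The JSON block printed above is the bdev configuration the wrapper streams to fio's spdk_bdev ioengine over /dev/fd/62; with "hdgst": true and "ddgst": true the NVMe/TCP controller is attached with header and data digests enabled, which is the point of this fio_dif_digest case. A rough standalone equivalent is sketched below, using placeholder file names the harness never actually writes (it pipes the config and job file through /dev/fd/62 and /dev/fd/61 instead):

  # digest_bdev.json (placeholder name): the bdev_nvme_attach_controller call
  # shown above, wrapped in SPDK's usual
  # {"subsystems":[{"subsystem":"bdev","config":[...]}]} layout.
  # Nvme0n1 is the bdev name the attached namespace is normally given.
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    fio --ioengine=spdk_bdev --spdk_json_conf=digest_bdev.json --thread \
        --name=filename0 --filename=Nvme0n1 --rw=randread --bs=128k \
        --iodepth=3 --numjobs=3 --time_based --runtime=10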
00:31:35.818 fio-3.35 00:31:35.818 Starting 3 threads 00:31:35.818 EAL: No free 2048 kB hugepages reported on node 1 00:31:48.052 00:31:48.052 filename0: (groupid=0, jobs=1): err= 0: pid=3765174: Mon Jul 15 11:43:14 2024 00:31:48.052 read: IOPS=169, BW=21.2MiB/s (22.2MB/s)(213MiB/10049msec) 00:31:48.052 slat (nsec): min=5647, max=59773, avg=7111.47, stdev=1966.09 00:31:48.052 clat (usec): min=7472, max=96622, avg=17686.65, stdev=14067.39 00:31:48.052 lat (usec): min=7481, max=96629, avg=17693.76, stdev=14067.40 00:31:48.052 clat percentiles (usec): 00:31:48.052 | 1.00th=[ 8029], 5.00th=[ 9372], 10.00th=[10159], 20.00th=[11207], 00:31:48.052 | 30.00th=[12125], 40.00th=[12911], 50.00th=[13566], 60.00th=[14091], 00:31:48.052 | 70.00th=[14615], 80.00th=[15270], 90.00th=[51643], 95.00th=[53740], 00:31:48.052 | 99.00th=[56886], 99.50th=[93848], 99.90th=[95945], 99.95th=[96994], 00:31:48.052 | 99.99th=[96994] 00:31:48.052 bw ( KiB/s): min=13312, max=26112, per=32.22%, avg=21747.20, stdev=3202.26, samples=20 00:31:48.052 iops : min= 104, max= 204, avg=169.90, stdev=25.02, samples=20 00:31:48.052 lat (msec) : 10=8.00%, 20=80.89%, 50=0.12%, 100=10.99% 00:31:48.052 cpu : usr=96.33%, sys=3.44%, ctx=21, majf=0, minf=162 00:31:48.052 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.052 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.052 issued rwts: total=1701,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.052 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:48.052 filename0: (groupid=0, jobs=1): err= 0: pid=3765175: Mon Jul 15 11:43:14 2024 00:31:48.052 read: IOPS=186, BW=23.3MiB/s (24.4MB/s)(234MiB/10047msec) 00:31:48.052 slat (nsec): min=5651, max=32804, avg=6699.89, stdev=1171.33 00:31:48.052 clat (usec): min=6204, max=97266, avg=16084.24, stdev=11751.26 00:31:48.052 lat (usec): min=6211, max=97273, avg=16090.94, stdev=11751.28 00:31:48.052 clat percentiles (usec): 00:31:48.052 | 1.00th=[ 7373], 5.00th=[ 8979], 10.00th=[10159], 20.00th=[10945], 00:31:48.052 | 30.00th=[11731], 40.00th=[12387], 50.00th=[13304], 60.00th=[13960], 00:31:48.052 | 70.00th=[14484], 80.00th=[15139], 90.00th=[16450], 95.00th=[52691], 00:31:48.052 | 99.00th=[56361], 99.50th=[57410], 99.90th=[95945], 99.95th=[96994], 00:31:48.052 | 99.99th=[96994] 00:31:48.052 bw ( KiB/s): min=18432, max=33536, per=35.42%, avg=23910.40, stdev=4013.83, samples=20 00:31:48.052 iops : min= 144, max= 262, avg=186.80, stdev=31.36, samples=20 00:31:48.052 lat (msec) : 10=8.40%, 20=83.74%, 50=0.11%, 100=7.75% 00:31:48.052 cpu : usr=95.33%, sys=4.42%, ctx=27, majf=0, minf=103 00:31:48.052 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.052 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.052 issued rwts: total=1870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.052 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:48.052 filename0: (groupid=0, jobs=1): err= 0: pid=3765176: Mon Jul 15 11:43:14 2024 00:31:48.052 read: IOPS=172, BW=21.5MiB/s (22.5MB/s)(216MiB/10045msec) 00:31:48.052 slat (nsec): min=5725, max=31775, avg=6485.17, stdev=1013.47 00:31:48.052 clat (msec): min=5, max=134, avg=17.41, stdev=13.94 00:31:48.052 lat (msec): min=5, max=134, avg=17.41, stdev=13.94 00:31:48.052 clat percentiles (msec): 00:31:48.052 | 1.00th=[ 
8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:31:48.052 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 15], 00:31:48.052 | 70.00th=[ 15], 80.00th=[ 16], 90.00th=[ 51], 95.00th=[ 55], 00:31:48.052 | 99.00th=[ 58], 99.50th=[ 93], 99.90th=[ 96], 99.95th=[ 136], 00:31:48.052 | 99.99th=[ 136] 00:31:48.052 bw ( KiB/s): min=12288, max=28160, per=32.73%, avg=22092.80, stdev=4194.27, samples=20 00:31:48.052 iops : min= 96, max= 220, avg=172.60, stdev=32.77, samples=20 00:31:48.052 lat (msec) : 10=10.82%, 20=78.82%, 50=0.17%, 100=10.13%, 250=0.06% 00:31:48.052 cpu : usr=95.74%, sys=4.02%, ctx=20, majf=0, minf=181 00:31:48.052 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.052 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.052 issued rwts: total=1728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.052 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:48.052 00:31:48.052 Run status group 0 (all jobs): 00:31:48.052 READ: bw=65.9MiB/s (69.1MB/s), 21.2MiB/s-23.3MiB/s (22.2MB/s-24.4MB/s), io=662MiB (695MB), run=10045-10049msec 00:31:48.052 11:43:15 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:48.053 11:43:15 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:48.053 11:43:15 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:48.053 11:43:15 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:48.053 11:43:15 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:48.053 11:43:15 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:48.053 11:43:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.053 11:43:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:48.053 11:43:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.053 11:43:15 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:48.053 11:43:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.053 11:43:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:48.053 11:43:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.053 00:31:48.053 real 0m11.210s 00:31:48.053 user 0m43.713s 00:31:48.053 sys 0m1.490s 00:31:48.053 11:43:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:48.053 11:43:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:48.053 ************************************ 00:31:48.053 END TEST fio_dif_digest 00:31:48.053 ************************************ 00:31:48.053 11:43:15 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:48.053 11:43:15 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:48.053 11:43:15 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:48.053 11:43:15 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:48.053 11:43:15 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:48.053 11:43:15 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:48.053 11:43:15 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:48.053 11:43:15 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:48.053 11:43:15 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:48.053 
rmmod nvme_tcp 00:31:48.053 rmmod nvme_fabrics 00:31:48.053 rmmod nvme_keyring 00:31:48.053 11:43:15 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:48.053 11:43:15 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:48.053 11:43:15 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:48.053 11:43:15 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3754811 ']' 00:31:48.053 11:43:15 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3754811 00:31:48.053 11:43:15 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 3754811 ']' 00:31:48.053 11:43:15 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 3754811 00:31:48.053 11:43:15 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:31:48.053 11:43:15 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:48.053 11:43:15 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3754811 00:31:48.053 11:43:15 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:48.053 11:43:15 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:48.053 11:43:15 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3754811' 00:31:48.053 killing process with pid 3754811 00:31:48.053 11:43:15 nvmf_dif -- common/autotest_common.sh@967 -- # kill 3754811 00:31:48.053 11:43:15 nvmf_dif -- common/autotest_common.sh@972 -- # wait 3754811 00:31:48.053 11:43:15 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:48.053 11:43:15 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:50.602 Waiting for block devices as requested 00:31:50.602 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:50.602 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:50.602 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:50.602 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:50.602 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:50.602 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:50.602 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:50.862 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:50.862 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:51.123 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:51.123 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:51.123 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:51.123 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:51.384 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:51.384 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:51.384 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:51.384 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:51.956 11:43:20 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:51.956 11:43:20 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:51.956 11:43:20 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:51.956 11:43:20 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:51.956 11:43:20 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:51.956 11:43:20 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:51.956 11:43:20 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.871 11:43:22 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:53.871 00:31:53.871 real 1m17.245s 00:31:53.871 user 8m4.728s 00:31:53.871 sys 0m19.586s 00:31:53.871 11:43:22 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:53.871 
11:43:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:53.871 ************************************ 00:31:53.871 END TEST nvmf_dif 00:31:53.871 ************************************ 00:31:53.871 11:43:22 -- common/autotest_common.sh@1142 -- # return 0 00:31:53.871 11:43:22 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:53.871 11:43:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:53.871 11:43:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:53.871 11:43:22 -- common/autotest_common.sh@10 -- # set +x 00:31:53.871 ************************************ 00:31:53.871 START TEST nvmf_abort_qd_sizes 00:31:53.871 ************************************ 00:31:53.871 11:43:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:54.132 * Looking for test storage... 00:31:54.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:54.132 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:54.133 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:54.133 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:54.133 11:43:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:54.133 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:54.133 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:54.133 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:54.133 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:54.133 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:54.133 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:54.133 11:43:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:54.133 11:43:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.133 11:43:22 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:54.133 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:54.133 11:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:31:54.133 11:43:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:02.276 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:02.276 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:32:02.276 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:02.276 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:02.276 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:02.276 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:02.276 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:02.276 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:32:02.276 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:02.276 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:32:02.276 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:32:02.276 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:32:02.276 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:32:02.276 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:32:02.276 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:32:02.276 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:02.276 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:02.276 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:02.276 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:02.276 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:02.276 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:02.276 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:02.276 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:02.276 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:02.276 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:02.276 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:02.276 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:02.277 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:02.277 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:02.277 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:02.277 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
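The two "Found net devices under 0000:4b:00.x" lines come straight from the sysfs glob in nvmf/common.sh ("/sys/bus/pci/devices/$pci/net/"*): each E810 port found by PCI ID is mapped to whatever interface name udev assigned it, cvl_0_0 and cvl_0_1 on this host. A hand-run sketch of the same lookup (interface names will differ per host):

  for pci in 0000:4b:00.0 0000:4b:00.1; do
    # each PCI network function exposes its interface name(s) under sysfs
    echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
  done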
00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:02.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:02.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:32:02.277 00:32:02.277 --- 10.0.0.2 ping statistics --- 00:32:02.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:02.277 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:02.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:02.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.392 ms 00:32:02.277 00:32:02.277 --- 10.0.0.1 ping statistics --- 00:32:02.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:02.277 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:32:02.277 11:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:04.826 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:04.826 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:04.826 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:04.826 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:04.826 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:04.826 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:04.826 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:04.826 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:04.826 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:04.826 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:04.826 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:04.826 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:04.826 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:04.826 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:04.826 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:04.826 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:04.826 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:05.109 11:43:33 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:05.109 11:43:33 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:05.110 11:43:33 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:05.110 11:43:33 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:05.110 11:43:33 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:05.110 11:43:33 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:05.110 11:43:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:05.110 11:43:33 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:05.110 11:43:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:05.110 11:43:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:05.110 11:43:33 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3774596 00:32:05.110 11:43:33 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3774596 00:32:05.110 11:43:33 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:05.110 11:43:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 3774596 ']' 00:32:05.110 11:43:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:05.110 11:43:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:05.110 11:43:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:05.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:05.110 11:43:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:05.110 11:43:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:05.371 [2024-07-15 11:43:33.861952] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:32:05.371 [2024-07-15 11:43:33.862001] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:05.371 EAL: No free 2048 kB hugepages reported on node 1 00:32:05.371 [2024-07-15 11:43:33.927918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:05.371 [2024-07-15 11:43:33.993973] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:05.371 [2024-07-15 11:43:33.994011] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:05.371 [2024-07-15 11:43:33.994019] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:05.371 [2024-07-15 11:43:33.994026] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:05.371 [2024-07-15 11:43:33.994031] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:05.371 [2024-07-15 11:43:33.994175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:05.371 [2024-07-15 11:43:33.994452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:05.371 [2024-07-15 11:43:33.994501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:05.371 [2024-07-15 11:43:33.994502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.943 11:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:05.943 11:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:32:05.943 11:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:05.943 11:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:05.943 11:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:06.203 11:43:34 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:06.203 11:43:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:06.203 11:43:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:06.203 11:43:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:06.203 11:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:32:06.203 11:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:32:06.203 11:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:32:06.203 11:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:06.203 11:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:32:06.203 11:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:32:06.203 11:43:34 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:32:06.203 11:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:32:06.203 11:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:32:06.203 11:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:32:06.203 11:43:34 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:32:06.203 11:43:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:06.203 11:43:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:32:06.203 11:43:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:06.203 11:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:06.203 11:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:06.203 11:43:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:06.203 ************************************ 00:32:06.203 START TEST spdk_target_abort 00:32:06.203 ************************************ 00:32:06.203 11:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:32:06.203 11:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:06.203 11:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:32:06.203 11:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.203 11:43:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:06.464 spdk_targetn1 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:06.464 [2024-07-15 11:43:35.029156] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:06.464 [2024-07-15 11:43:35.069379] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:06.464 11:43:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:06.464 EAL: No free 2048 kB hugepages 
reported on node 1 00:32:06.725 [2024-07-15 11:43:35.212289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:688 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:32:06.725 [2024-07-15 11:43:35.212318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0058 p:1 m:0 dnr:0 00:32:06.725 [2024-07-15 11:43:35.283887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2992 len:8 PRP1 0x2000078c8000 PRP2 0x0 00:32:06.725 [2024-07-15 11:43:35.283906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:10.104 Initializing NVMe Controllers 00:32:10.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:10.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:10.104 Initialization complete. Launching workers. 00:32:10.104 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11218, failed: 2 00:32:10.104 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3333, failed to submit 7887 00:32:10.104 success 768, unsuccess 2565, failed 0 00:32:10.104 11:43:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:10.104 11:43:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:10.104 EAL: No free 2048 kB hugepages reported on node 1 00:32:10.104 [2024-07-15 11:43:38.475253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:832 len:8 PRP1 0x200007c54000 PRP2 0x0 00:32:10.104 [2024-07-15 11:43:38.475285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:0072 p:1 m:0 dnr:0 00:32:10.104 [2024-07-15 11:43:38.531316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:2176 len:8 PRP1 0x200007c52000 PRP2 0x0 00:32:10.104 [2024-07-15 11:43:38.531341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:10.104 [2024-07-15 11:43:38.539193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:182 nsid:1 lba:2344 len:8 PRP1 0x200007c56000 PRP2 0x0 00:32:10.104 [2024-07-15 11:43:38.539214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:182 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:12.649 [2024-07-15 11:43:40.854286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:56776 len:8 PRP1 0x200007c52000 PRP2 0x0 00:32:12.649 [2024-07-15 11:43:40.854328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:00bf p:0 m:0 dnr:0 00:32:12.909 Initializing NVMe Controllers 00:32:12.909 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:12.909 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:12.909 Initialization complete. Launching workers. 
00:32:12.909 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8850, failed: 4 00:32:12.909 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1209, failed to submit 7645 00:32:12.909 success 373, unsuccess 836, failed 0 00:32:12.909 11:43:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:12.909 11:43:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:13.170 EAL: No free 2048 kB hugepages reported on node 1 00:32:16.468 Initializing NVMe Controllers 00:32:16.468 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:16.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:16.468 Initialization complete. Launching workers. 00:32:16.468 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41938, failed: 0 00:32:16.468 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2686, failed to submit 39252 00:32:16.468 success 596, unsuccess 2090, failed 0 00:32:16.468 11:43:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:16.468 11:43:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.468 11:43:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:16.468 11:43:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.468 11:43:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:16.468 11:43:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.468 11:43:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3774596 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 3774596 ']' 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 3774596 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3774596 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3774596' 00:32:18.382 killing process with pid 3774596 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 3774596 00:32:18.382 11:43:46 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 3774596 00:32:18.382 00:32:18.382 real 0m12.141s 00:32:18.382 user 0m49.111s 00:32:18.382 sys 0m2.064s 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:18.382 ************************************ 00:32:18.382 END TEST spdk_target_abort 00:32:18.382 ************************************ 00:32:18.382 11:43:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:32:18.382 11:43:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:18.382 11:43:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:18.382 11:43:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:18.382 11:43:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:18.382 ************************************ 00:32:18.382 START TEST kernel_target_abort 00:32:18.382 ************************************ 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 
00:32:18.382 11:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:18.382 11:43:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:21.686 Waiting for block devices as requested 00:32:21.686 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:21.686 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:21.686 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:21.947 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:21.947 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:21.947 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:22.207 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:22.207 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:22.207 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:22.469 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:22.469 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:22.469 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:22.729 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:22.729 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:22.729 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:22.729 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:22.989 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:23.249 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:23.249 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:23.249 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:23.249 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:23.249 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:23.249 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:23.249 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:23.249 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:23.249 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:23.249 No valid GPT data, bailing 00:32:23.249 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:23.249 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:32:23.249 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:32:23.249 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:23.249 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:23.249 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:23.249 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:23.249 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:23.249 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:23.249 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:32:23.249 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:23.249 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:32:23.249 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:23.249 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:32:23.249 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:32:23.249 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:32:23.249 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:23.249 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:32:23.249 00:32:23.249 Discovery Log Number of Records 2, Generation counter 2 00:32:23.249 =====Discovery Log Entry 0====== 00:32:23.249 trtype: tcp 00:32:23.249 adrfam: ipv4 00:32:23.249 subtype: current discovery subsystem 00:32:23.249 treq: not specified, sq flow control disable supported 00:32:23.249 portid: 1 00:32:23.249 trsvcid: 4420 00:32:23.249 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:23.249 traddr: 10.0.0.1 00:32:23.249 eflags: none 00:32:23.249 sectype: none 00:32:23.249 =====Discovery Log Entry 1====== 00:32:23.249 trtype: tcp 00:32:23.249 adrfam: ipv4 00:32:23.249 subtype: nvme subsystem 00:32:23.249 treq: not specified, sq flow control disable supported 00:32:23.249 portid: 1 00:32:23.249 trsvcid: 4420 00:32:23.249 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:23.249 traddr: 10.0.0.1 00:32:23.249 eflags: none 00:32:23.249 sectype: none 00:32:23.250 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:23.250 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:23.250 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:23.250 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:23.250 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:23.250 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:23.250 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:23.250 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:23.250 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:23.250 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 
-- # for r in trtype adrfam traddr trsvcid subnqn 00:32:23.250 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:23.250 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:23.250 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:23.250 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:23.250 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:23.250 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:23.250 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:23.250 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:23.250 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:23.250 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:23.250 11:43:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:23.511 EAL: No free 2048 kB hugepages reported on node 1 00:32:26.811 Initializing NVMe Controllers 00:32:26.811 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:26.811 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:26.811 Initialization complete. Launching workers. 00:32:26.811 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 50449, failed: 0 00:32:26.811 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 50449, failed to submit 0 00:32:26.811 success 0, unsuccess 50449, failed 0 00:32:26.811 11:43:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:26.811 11:43:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:26.811 EAL: No free 2048 kB hugepages reported on node 1 00:32:30.109 Initializing NVMe Controllers 00:32:30.109 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:30.109 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:30.109 Initialization complete. Launching workers. 
00:32:30.109 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 90401, failed: 0 00:32:30.109 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22774, failed to submit 67627 00:32:30.109 success 0, unsuccess 22774, failed 0 00:32:30.109 11:43:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:30.109 11:43:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:30.109 EAL: No free 2048 kB hugepages reported on node 1 00:32:32.651 Initializing NVMe Controllers 00:32:32.651 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:32.651 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:32.651 Initialization complete. Launching workers. 00:32:32.651 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 86983, failed: 0 00:32:32.651 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21702, failed to submit 65281 00:32:32.651 success 0, unsuccess 21702, failed 0 00:32:32.651 11:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:32.651 11:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:32.651 11:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:32:32.651 11:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:32.651 11:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:32.651 11:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:32.651 11:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:32.651 11:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:32.651 11:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:32.651 11:44:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:35.951 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:35.951 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:35.951 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:35.951 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:35.951 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:35.951 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:35.951 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:35.951 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:35.951 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:35.951 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:35.951 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:35.952 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:36.284 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:36.284 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:32:36.284 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:36.284 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:38.212 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:38.212 00:32:38.212 real 0m19.822s 00:32:38.212 user 0m8.276s 00:32:38.212 sys 0m6.167s 00:32:38.212 11:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:38.212 11:44:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:38.212 ************************************ 00:32:38.212 END TEST kernel_target_abort 00:32:38.212 ************************************ 00:32:38.212 11:44:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:32:38.212 11:44:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:38.212 11:44:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:38.212 11:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:38.212 11:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:32:38.212 11:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:38.212 11:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:32:38.212 11:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:38.212 11:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:38.212 rmmod nvme_tcp 00:32:38.212 rmmod nvme_fabrics 00:32:38.212 rmmod nvme_keyring 00:32:38.212 11:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:38.212 11:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:32:38.212 11:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:32:38.212 11:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3774596 ']' 00:32:38.213 11:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3774596 00:32:38.213 11:44:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 3774596 ']' 00:32:38.213 11:44:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 3774596 00:32:38.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3774596) - No such process 00:32:38.213 11:44:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 3774596 is not found' 00:32:38.213 Process with pid 3774596 is not found 00:32:38.213 11:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:38.213 11:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:41.516 Waiting for block devices as requested 00:32:41.516 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:41.516 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:41.778 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:41.778 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:41.778 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:42.040 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:42.040 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:42.040 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:42.301 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:42.301 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:42.562 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:42.562 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:42.562 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:42.562 0000:00:01.2 (8086 0b00): vfio-pci -> 
ioatdma 00:32:42.823 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:42.823 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:42.823 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:43.084 11:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:43.084 11:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:43.084 11:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:43.084 11:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:43.084 11:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:43.084 11:44:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:43.084 11:44:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.634 11:44:13 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:45.634 00:32:45.634 real 0m51.281s 00:32:45.634 user 1m2.541s 00:32:45.634 sys 0m18.986s 00:32:45.634 11:44:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:45.634 11:44:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:45.635 ************************************ 00:32:45.635 END TEST nvmf_abort_qd_sizes 00:32:45.635 ************************************ 00:32:45.635 11:44:13 -- common/autotest_common.sh@1142 -- # return 0 00:32:45.635 11:44:13 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:45.635 11:44:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:45.635 11:44:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:45.635 11:44:13 -- common/autotest_common.sh@10 -- # set +x 00:32:45.635 ************************************ 00:32:45.635 START TEST keyring_file 00:32:45.635 ************************************ 00:32:45.635 11:44:13 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:45.635 * Looking for test storage... 
00:32:45.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:45.635 11:44:13 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:45.635 11:44:13 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:45.635 11:44:13 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:45.635 11:44:13 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:45.635 11:44:13 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:45.635 11:44:13 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:45.635 11:44:13 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:45.635 11:44:13 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:45.635 11:44:13 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:45.635 11:44:13 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:45.635 11:44:13 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:45.635 11:44:13 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:45.635 11:44:13 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:45.635 11:44:14 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:45.635 11:44:14 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:45.635 11:44:14 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:45.635 11:44:14 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:45.635 11:44:14 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:45.635 11:44:14 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:45.635 11:44:14 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:45.635 11:44:14 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:45.635 11:44:14 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:45.635 11:44:14 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:45.635 11:44:14 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.635 11:44:14 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.635 11:44:14 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.635 11:44:14 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:45.635 11:44:14 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.635 11:44:14 keyring_file -- nvmf/common.sh@47 -- # : 0 00:32:45.635 11:44:14 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:45.635 11:44:14 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:45.635 11:44:14 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:45.635 11:44:14 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:45.635 11:44:14 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:45.635 11:44:14 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:45.635 11:44:14 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:45.635 11:44:14 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:45.635 11:44:14 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:45.635 11:44:14 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:45.635 11:44:14 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:45.635 11:44:14 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:45.635 11:44:14 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:45.635 11:44:14 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:45.635 11:44:14 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:45.635 11:44:14 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:45.635 11:44:14 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:45.635 11:44:14 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:45.635 11:44:14 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:45.635 11:44:14 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:45.635 11:44:14 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.mWYZwGupsY 00:32:45.635 11:44:14 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:45.635 11:44:14 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:45.635 11:44:14 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:45.635 11:44:14 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:45.635 11:44:14 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:45.635 11:44:14 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:45.635 11:44:14 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:45.635 11:44:14 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.mWYZwGupsY 00:32:45.635 11:44:14 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.mWYZwGupsY 00:32:45.635 11:44:14 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.mWYZwGupsY 00:32:45.635 11:44:14 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:45.635 11:44:14 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:45.635 11:44:14 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:45.635 11:44:14 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:45.635 11:44:14 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:45.635 11:44:14 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:45.635 11:44:14 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.AGOSOhyh2J 00:32:45.635 11:44:14 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:45.635 11:44:14 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:45.635 11:44:14 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:45.635 11:44:14 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:45.635 11:44:14 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:45.635 11:44:14 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:45.635 11:44:14 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:45.635 11:44:14 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.AGOSOhyh2J 00:32:45.635 11:44:14 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.AGOSOhyh2J 00:32:45.635 11:44:14 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.AGOSOhyh2J 00:32:45.635 11:44:14 keyring_file -- keyring/file.sh@30 -- # tgtpid=3784674 00:32:45.635 11:44:14 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3784674 00:32:45.635 11:44:14 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:45.635 11:44:14 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3784674 ']' 00:32:45.635 11:44:14 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:45.635 11:44:14 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:45.635 11:44:14 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:45.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:45.635 11:44:14 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:45.635 11:44:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:45.635 [2024-07-15 11:44:14.183504] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:32:45.635 [2024-07-15 11:44:14.183573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3784674 ] 00:32:45.635 EAL: No free 2048 kB hugepages reported on node 1 00:32:45.635 [2024-07-15 11:44:14.249317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.635 [2024-07-15 11:44:14.323489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.578 11:44:14 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:46.578 11:44:14 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:46.578 11:44:14 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:46.578 11:44:14 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.578 11:44:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:46.578 [2024-07-15 11:44:14.957940] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:46.578 null0 00:32:46.578 [2024-07-15 11:44:14.989988] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:46.578 [2024-07-15 11:44:14.990222] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:46.578 [2024-07-15 11:44:14.997999] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:46.578 11:44:15 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.578 11:44:15 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:46.578 11:44:15 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:46.578 11:44:15 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:46.578 11:44:15 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:46.578 11:44:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:46.578 11:44:15 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:46.578 11:44:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:46.578 11:44:15 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:46.578 11:44:15 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.578 11:44:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:46.578 [2024-07-15 11:44:15.014041] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:46.578 request: 00:32:46.578 { 00:32:46.578 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:46.578 "secure_channel": false, 00:32:46.578 "listen_address": { 00:32:46.578 "trtype": "tcp", 00:32:46.578 "traddr": "127.0.0.1", 00:32:46.578 "trsvcid": "4420" 00:32:46.578 }, 00:32:46.578 "method": "nvmf_subsystem_add_listener", 00:32:46.578 "req_id": 1 00:32:46.578 } 00:32:46.578 Got JSON-RPC error response 00:32:46.578 response: 00:32:46.578 { 00:32:46.578 "code": -32602, 00:32:46.578 "message": "Invalid parameters" 00:32:46.578 } 00:32:46.579 11:44:15 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:46.579 11:44:15 keyring_file -- common/autotest_common.sh@651 -- # es=1 
00:32:46.579 11:44:15 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:46.579 11:44:15 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:46.579 11:44:15 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:46.579 11:44:15 keyring_file -- keyring/file.sh@46 -- # bperfpid=3784800 00:32:46.579 11:44:15 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3784800 /var/tmp/bperf.sock 00:32:46.579 11:44:15 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3784800 ']' 00:32:46.579 11:44:15 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:46.579 11:44:15 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:46.579 11:44:15 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:46.579 11:44:15 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:46.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:46.579 11:44:15 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:46.579 11:44:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:46.579 [2024-07-15 11:44:15.071722] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 00:32:46.579 [2024-07-15 11:44:15.071768] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3784800 ] 00:32:46.579 EAL: No free 2048 kB hugepages reported on node 1 00:32:46.579 [2024-07-15 11:44:15.146158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.579 [2024-07-15 11:44:15.210388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:47.154 11:44:15 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:47.154 11:44:15 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:47.154 11:44:15 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mWYZwGupsY 00:32:47.154 11:44:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mWYZwGupsY 00:32:47.416 11:44:15 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.AGOSOhyh2J 00:32:47.416 11:44:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.AGOSOhyh2J 00:32:47.678 11:44:16 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:32:47.678 11:44:16 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:32:47.678 11:44:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:47.678 11:44:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:47.678 11:44:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:47.678 11:44:16 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.mWYZwGupsY == \/\t\m\p\/\t\m\p\.\m\W\Y\Z\w\G\u\p\s\Y ]] 00:32:47.678 11:44:16 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:32:47.678 11:44:16 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:47.678 11:44:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:47.678 11:44:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:47.678 11:44:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:47.938 11:44:16 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.AGOSOhyh2J == \/\t\m\p\/\t\m\p\.\A\G\O\S\O\h\y\h\2\J ]] 00:32:47.938 11:44:16 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:47.938 11:44:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:47.938 11:44:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:47.938 11:44:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:47.938 11:44:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:47.938 11:44:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:47.938 11:44:16 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:47.938 11:44:16 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:47.938 11:44:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:47.938 11:44:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:47.938 11:44:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:47.938 11:44:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:47.938 11:44:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:48.199 11:44:16 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:48.199 11:44:16 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:48.199 11:44:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:48.460 [2024-07-15 11:44:16.918822] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:48.460 nvme0n1 00:32:48.460 11:44:17 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:48.460 11:44:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:48.460 11:44:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:48.461 11:44:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:48.461 11:44:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:48.461 11:44:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:48.721 11:44:17 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:48.721 11:44:17 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:48.721 11:44:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:48.721 11:44:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:48.721 11:44:17 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:48.721 11:44:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:48.721 11:44:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:48.721 11:44:17 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:48.721 11:44:17 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:48.721 Running I/O for 1 seconds... 00:32:50.103 00:32:50.103 Latency(us) 00:32:50.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:50.103 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:50.103 nvme0n1 : 1.02 7630.67 29.81 0.00 0.00 16634.87 4560.21 22937.60 00:32:50.103 =================================================================================================================== 00:32:50.103 Total : 7630.67 29.81 0.00 0.00 16634.87 4560.21 22937.60 00:32:50.103 0 00:32:50.103 11:44:18 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:50.103 11:44:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:50.103 11:44:18 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:50.103 11:44:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:50.103 11:44:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:50.103 11:44:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:50.103 11:44:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:50.103 11:44:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:50.103 11:44:18 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:50.103 11:44:18 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:50.103 11:44:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:50.103 11:44:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:50.103 11:44:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:50.103 11:44:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:50.103 11:44:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:50.361 11:44:18 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:50.362 11:44:18 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:50.362 11:44:18 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:50.362 11:44:18 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:50.362 11:44:18 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:50.362 11:44:18 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:50.362 11:44:18 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:50.362 11:44:18 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:50.362 11:44:18 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:50.362 11:44:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:50.621 [2024-07-15 11:44:19.099554] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:50.621 [2024-07-15 11:44:19.100266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108a9d0 (107): Transport endpoint is not connected 00:32:50.621 [2024-07-15 11:44:19.101262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108a9d0 (9): Bad file descriptor 00:32:50.621 [2024-07-15 11:44:19.102264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:50.621 [2024-07-15 11:44:19.102271] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:50.621 [2024-07-15 11:44:19.102277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:50.621 request: 00:32:50.621 { 00:32:50.621 "name": "nvme0", 00:32:50.621 "trtype": "tcp", 00:32:50.621 "traddr": "127.0.0.1", 00:32:50.621 "adrfam": "ipv4", 00:32:50.621 "trsvcid": "4420", 00:32:50.621 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:50.621 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:50.621 "prchk_reftag": false, 00:32:50.621 "prchk_guard": false, 00:32:50.621 "hdgst": false, 00:32:50.621 "ddgst": false, 00:32:50.621 "psk": "key1", 00:32:50.621 "method": "bdev_nvme_attach_controller", 00:32:50.621 "req_id": 1 00:32:50.621 } 00:32:50.621 Got JSON-RPC error response 00:32:50.621 response: 00:32:50.621 { 00:32:50.621 "code": -5, 00:32:50.621 "message": "Input/output error" 00:32:50.621 } 00:32:50.621 11:44:19 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:50.621 11:44:19 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:50.621 11:44:19 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:50.621 11:44:19 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:50.621 11:44:19 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:50.621 11:44:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:50.621 11:44:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:50.621 11:44:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:50.621 11:44:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:50.621 11:44:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:50.621 11:44:19 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:50.621 11:44:19 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:50.621 11:44:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:50.621 11:44:19 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:50.621 11:44:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:50.621 11:44:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:50.621 11:44:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:50.880 11:44:19 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:50.880 11:44:19 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:50.880 11:44:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:51.142 11:44:19 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:51.142 11:44:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:51.142 11:44:19 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:51.142 11:44:19 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:51.142 11:44:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:51.401 11:44:19 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:51.401 11:44:19 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.mWYZwGupsY 00:32:51.401 11:44:19 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.mWYZwGupsY 00:32:51.401 11:44:19 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:51.401 11:44:19 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.mWYZwGupsY 00:32:51.401 11:44:19 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:51.401 11:44:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:51.401 11:44:19 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:51.401 11:44:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:51.401 11:44:19 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mWYZwGupsY 00:32:51.401 11:44:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mWYZwGupsY 00:32:51.401 [2024-07-15 11:44:20.047033] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.mWYZwGupsY': 0100660 00:32:51.401 [2024-07-15 11:44:20.047054] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:51.401 request: 00:32:51.401 { 00:32:51.401 "name": "key0", 00:32:51.401 "path": "/tmp/tmp.mWYZwGupsY", 00:32:51.401 "method": "keyring_file_add_key", 00:32:51.401 "req_id": 1 00:32:51.401 } 00:32:51.401 Got JSON-RPC error response 00:32:51.401 response: 00:32:51.401 { 00:32:51.401 "code": -1, 00:32:51.401 "message": "Operation not permitted" 00:32:51.401 } 00:32:51.401 11:44:20 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:51.401 11:44:20 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:51.401 11:44:20 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:51.401 11:44:20 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:51.401 11:44:20 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.mWYZwGupsY 00:32:51.401 11:44:20 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mWYZwGupsY 00:32:51.401 11:44:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mWYZwGupsY 00:32:51.661 11:44:20 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.mWYZwGupsY 00:32:51.661 11:44:20 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:32:51.661 11:44:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:51.661 11:44:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:51.661 11:44:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:51.661 11:44:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:51.661 11:44:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:51.946 11:44:20 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:51.946 11:44:20 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:51.946 11:44:20 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:51.946 11:44:20 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:51.946 11:44:20 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:51.946 11:44:20 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:51.946 11:44:20 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:51.946 11:44:20 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:51.946 11:44:20 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:51.946 11:44:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:51.946 [2024-07-15 11:44:20.512214] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.mWYZwGupsY': No such file or directory 00:32:51.946 [2024-07-15 11:44:20.512230] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:51.946 [2024-07-15 11:44:20.512247] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:51.946 [2024-07-15 11:44:20.512252] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:51.946 [2024-07-15 11:44:20.512257] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:51.946 request: 00:32:51.946 { 00:32:51.946 "name": "nvme0", 00:32:51.946 "trtype": "tcp", 00:32:51.946 "traddr": "127.0.0.1", 00:32:51.946 "adrfam": "ipv4", 00:32:51.946 
"trsvcid": "4420", 00:32:51.946 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:51.946 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:51.946 "prchk_reftag": false, 00:32:51.946 "prchk_guard": false, 00:32:51.946 "hdgst": false, 00:32:51.946 "ddgst": false, 00:32:51.946 "psk": "key0", 00:32:51.946 "method": "bdev_nvme_attach_controller", 00:32:51.946 "req_id": 1 00:32:51.946 } 00:32:51.946 Got JSON-RPC error response 00:32:51.946 response: 00:32:51.946 { 00:32:51.946 "code": -19, 00:32:51.946 "message": "No such device" 00:32:51.946 } 00:32:51.946 11:44:20 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:51.946 11:44:20 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:51.946 11:44:20 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:51.946 11:44:20 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:51.946 11:44:20 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:51.946 11:44:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:52.206 11:44:20 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:52.206 11:44:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:52.206 11:44:20 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:52.206 11:44:20 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:52.206 11:44:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:52.206 11:44:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:52.206 11:44:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.LkAfv7G5o4 00:32:52.206 11:44:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:52.206 11:44:20 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:52.206 11:44:20 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:52.206 11:44:20 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:52.206 11:44:20 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:52.206 11:44:20 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:52.206 11:44:20 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:52.206 11:44:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.LkAfv7G5o4 00:32:52.206 11:44:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.LkAfv7G5o4 00:32:52.206 11:44:20 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.LkAfv7G5o4 00:32:52.206 11:44:20 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LkAfv7G5o4 00:32:52.206 11:44:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LkAfv7G5o4 00:32:52.206 11:44:20 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:52.206 11:44:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:52.466 nvme0n1 00:32:52.466 
11:44:21 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:32:52.466 11:44:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:52.466 11:44:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:52.466 11:44:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:52.466 11:44:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:52.466 11:44:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:52.727 11:44:21 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:52.727 11:44:21 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:52.727 11:44:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:52.727 11:44:21 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:52.727 11:44:21 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:52.727 11:44:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:52.727 11:44:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:52.727 11:44:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:53.017 11:44:21 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:53.017 11:44:21 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:53.017 11:44:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:53.017 11:44:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:53.017 11:44:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:53.017 11:44:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:53.017 11:44:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:53.278 11:44:21 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:53.278 11:44:21 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:53.278 11:44:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:53.278 11:44:21 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:53.278 11:44:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:53.278 11:44:21 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:53.540 11:44:22 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:53.540 11:44:22 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LkAfv7G5o4 00:32:53.540 11:44:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LkAfv7G5o4 00:32:53.540 11:44:22 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.AGOSOhyh2J 00:32:53.540 11:44:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.AGOSOhyh2J 00:32:53.801 11:44:22 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:53.801 11:44:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:54.063 nvme0n1 00:32:54.063 11:44:22 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:54.063 11:44:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:54.325 11:44:22 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:54.325 "subsystems": [ 00:32:54.325 { 00:32:54.325 "subsystem": "keyring", 00:32:54.325 "config": [ 00:32:54.325 { 00:32:54.325 "method": "keyring_file_add_key", 00:32:54.325 "params": { 00:32:54.325 "name": "key0", 00:32:54.325 "path": "/tmp/tmp.LkAfv7G5o4" 00:32:54.325 } 00:32:54.325 }, 00:32:54.325 { 00:32:54.325 "method": "keyring_file_add_key", 00:32:54.325 "params": { 00:32:54.325 "name": "key1", 00:32:54.325 "path": "/tmp/tmp.AGOSOhyh2J" 00:32:54.325 } 00:32:54.325 } 00:32:54.325 ] 00:32:54.325 }, 00:32:54.325 { 00:32:54.325 "subsystem": "iobuf", 00:32:54.325 "config": [ 00:32:54.325 { 00:32:54.325 "method": "iobuf_set_options", 00:32:54.325 "params": { 00:32:54.325 "small_pool_count": 8192, 00:32:54.325 "large_pool_count": 1024, 00:32:54.325 "small_bufsize": 8192, 00:32:54.325 "large_bufsize": 135168 00:32:54.325 } 00:32:54.325 } 00:32:54.325 ] 00:32:54.325 }, 00:32:54.325 { 00:32:54.325 "subsystem": "sock", 00:32:54.325 "config": [ 00:32:54.325 { 00:32:54.325 "method": "sock_set_default_impl", 00:32:54.325 "params": { 00:32:54.325 "impl_name": "posix" 00:32:54.325 } 00:32:54.325 }, 00:32:54.325 { 00:32:54.325 "method": "sock_impl_set_options", 00:32:54.325 "params": { 00:32:54.325 "impl_name": "ssl", 00:32:54.325 "recv_buf_size": 4096, 00:32:54.325 "send_buf_size": 4096, 00:32:54.325 "enable_recv_pipe": true, 00:32:54.325 "enable_quickack": false, 00:32:54.325 "enable_placement_id": 0, 00:32:54.325 "enable_zerocopy_send_server": true, 00:32:54.325 "enable_zerocopy_send_client": false, 00:32:54.325 "zerocopy_threshold": 0, 00:32:54.325 "tls_version": 0, 00:32:54.325 "enable_ktls": false 00:32:54.325 } 00:32:54.325 }, 00:32:54.325 { 00:32:54.325 "method": "sock_impl_set_options", 00:32:54.325 "params": { 00:32:54.325 "impl_name": "posix", 00:32:54.325 "recv_buf_size": 2097152, 00:32:54.325 "send_buf_size": 2097152, 00:32:54.325 "enable_recv_pipe": true, 00:32:54.325 "enable_quickack": false, 00:32:54.325 "enable_placement_id": 0, 00:32:54.325 "enable_zerocopy_send_server": true, 00:32:54.325 "enable_zerocopy_send_client": false, 00:32:54.325 "zerocopy_threshold": 0, 00:32:54.325 "tls_version": 0, 00:32:54.325 "enable_ktls": false 00:32:54.325 } 00:32:54.325 } 00:32:54.325 ] 00:32:54.325 }, 00:32:54.325 { 00:32:54.325 "subsystem": "vmd", 00:32:54.325 "config": [] 00:32:54.325 }, 00:32:54.325 { 00:32:54.325 "subsystem": "accel", 00:32:54.325 "config": [ 00:32:54.325 { 00:32:54.325 "method": "accel_set_options", 00:32:54.325 "params": { 00:32:54.325 "small_cache_size": 128, 00:32:54.325 "large_cache_size": 16, 00:32:54.325 "task_count": 2048, 00:32:54.325 "sequence_count": 2048, 00:32:54.325 "buf_count": 2048 00:32:54.325 } 00:32:54.325 } 00:32:54.325 ] 00:32:54.325 
}, 00:32:54.325 { 00:32:54.325 "subsystem": "bdev", 00:32:54.325 "config": [ 00:32:54.325 { 00:32:54.325 "method": "bdev_set_options", 00:32:54.325 "params": { 00:32:54.325 "bdev_io_pool_size": 65535, 00:32:54.325 "bdev_io_cache_size": 256, 00:32:54.325 "bdev_auto_examine": true, 00:32:54.325 "iobuf_small_cache_size": 128, 00:32:54.325 "iobuf_large_cache_size": 16 00:32:54.325 } 00:32:54.325 }, 00:32:54.325 { 00:32:54.325 "method": "bdev_raid_set_options", 00:32:54.325 "params": { 00:32:54.325 "process_window_size_kb": 1024 00:32:54.325 } 00:32:54.325 }, 00:32:54.325 { 00:32:54.325 "method": "bdev_iscsi_set_options", 00:32:54.325 "params": { 00:32:54.325 "timeout_sec": 30 00:32:54.325 } 00:32:54.325 }, 00:32:54.325 { 00:32:54.325 "method": "bdev_nvme_set_options", 00:32:54.325 "params": { 00:32:54.325 "action_on_timeout": "none", 00:32:54.325 "timeout_us": 0, 00:32:54.325 "timeout_admin_us": 0, 00:32:54.325 "keep_alive_timeout_ms": 10000, 00:32:54.325 "arbitration_burst": 0, 00:32:54.325 "low_priority_weight": 0, 00:32:54.325 "medium_priority_weight": 0, 00:32:54.325 "high_priority_weight": 0, 00:32:54.325 "nvme_adminq_poll_period_us": 10000, 00:32:54.325 "nvme_ioq_poll_period_us": 0, 00:32:54.325 "io_queue_requests": 512, 00:32:54.325 "delay_cmd_submit": true, 00:32:54.325 "transport_retry_count": 4, 00:32:54.325 "bdev_retry_count": 3, 00:32:54.325 "transport_ack_timeout": 0, 00:32:54.325 "ctrlr_loss_timeout_sec": 0, 00:32:54.325 "reconnect_delay_sec": 0, 00:32:54.325 "fast_io_fail_timeout_sec": 0, 00:32:54.325 "disable_auto_failback": false, 00:32:54.325 "generate_uuids": false, 00:32:54.325 "transport_tos": 0, 00:32:54.325 "nvme_error_stat": false, 00:32:54.325 "rdma_srq_size": 0, 00:32:54.325 "io_path_stat": false, 00:32:54.325 "allow_accel_sequence": false, 00:32:54.325 "rdma_max_cq_size": 0, 00:32:54.325 "rdma_cm_event_timeout_ms": 0, 00:32:54.325 "dhchap_digests": [ 00:32:54.325 "sha256", 00:32:54.325 "sha384", 00:32:54.325 "sha512" 00:32:54.325 ], 00:32:54.325 "dhchap_dhgroups": [ 00:32:54.325 "null", 00:32:54.325 "ffdhe2048", 00:32:54.325 "ffdhe3072", 00:32:54.325 "ffdhe4096", 00:32:54.325 "ffdhe6144", 00:32:54.325 "ffdhe8192" 00:32:54.325 ] 00:32:54.325 } 00:32:54.325 }, 00:32:54.325 { 00:32:54.325 "method": "bdev_nvme_attach_controller", 00:32:54.325 "params": { 00:32:54.325 "name": "nvme0", 00:32:54.325 "trtype": "TCP", 00:32:54.325 "adrfam": "IPv4", 00:32:54.325 "traddr": "127.0.0.1", 00:32:54.325 "trsvcid": "4420", 00:32:54.325 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:54.325 "prchk_reftag": false, 00:32:54.325 "prchk_guard": false, 00:32:54.325 "ctrlr_loss_timeout_sec": 0, 00:32:54.325 "reconnect_delay_sec": 0, 00:32:54.325 "fast_io_fail_timeout_sec": 0, 00:32:54.325 "psk": "key0", 00:32:54.325 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:54.325 "hdgst": false, 00:32:54.325 "ddgst": false 00:32:54.325 } 00:32:54.325 }, 00:32:54.325 { 00:32:54.325 "method": "bdev_nvme_set_hotplug", 00:32:54.325 "params": { 00:32:54.325 "period_us": 100000, 00:32:54.325 "enable": false 00:32:54.325 } 00:32:54.325 }, 00:32:54.325 { 00:32:54.325 "method": "bdev_wait_for_examine" 00:32:54.325 } 00:32:54.325 ] 00:32:54.325 }, 00:32:54.325 { 00:32:54.325 "subsystem": "nbd", 00:32:54.325 "config": [] 00:32:54.325 } 00:32:54.325 ] 00:32:54.325 }' 00:32:54.326 11:44:22 keyring_file -- keyring/file.sh@114 -- # killprocess 3784800 00:32:54.326 11:44:22 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3784800 ']' 00:32:54.326 11:44:22 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 3784800 00:32:54.326 11:44:22 keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:54.326 11:44:22 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:54.326 11:44:22 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3784800 00:32:54.326 11:44:22 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:54.326 11:44:22 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:54.326 11:44:22 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3784800' 00:32:54.326 killing process with pid 3784800 00:32:54.326 11:44:22 keyring_file -- common/autotest_common.sh@967 -- # kill 3784800 00:32:54.326 Received shutdown signal, test time was about 1.000000 seconds 00:32:54.326 00:32:54.326 Latency(us) 00:32:54.326 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:54.326 =================================================================================================================== 00:32:54.326 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:54.326 11:44:22 keyring_file -- common/autotest_common.sh@972 -- # wait 3784800 00:32:54.588 11:44:23 keyring_file -- keyring/file.sh@117 -- # bperfpid=3786481 00:32:54.588 11:44:23 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3786481 /var/tmp/bperf.sock 00:32:54.588 11:44:23 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3786481 ']' 00:32:54.588 11:44:23 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:54.588 11:44:23 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:54.588 11:44:23 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:54.588 11:44:23 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:54.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
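The step above (keyring/file.sh@112-119) dumps the running bperf configuration with save_config, kills the old bdevperf (pid 3784800 here), and starts a fresh instance that reads the same JSON back through -c /dev/fd/63, i.e. a process substitution. A rough sketch of that restart pattern, with the Jenkins workspace paths shortened:

```bash
# Sketch of the save_config -> restart pattern visible above (paths shortened).
rpc=./spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
config=$($rpc -s "$sock" save_config)       # keyring/file.sh@112, against the old bperf
# ... the old instance (pid 3784800 above) is then stopped via killprocess ...
./spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r "$sock" -z -c <(echo "$config") &    # appears as -c /dev/fd/63 in the command line
bperfpid=$!
```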
00:32:54.588 11:44:23 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:54.588 11:44:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:54.588 11:44:23 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:54.588 "subsystems": [ 00:32:54.588 { 00:32:54.588 "subsystem": "keyring", 00:32:54.588 "config": [ 00:32:54.588 { 00:32:54.588 "method": "keyring_file_add_key", 00:32:54.588 "params": { 00:32:54.588 "name": "key0", 00:32:54.588 "path": "/tmp/tmp.LkAfv7G5o4" 00:32:54.588 } 00:32:54.588 }, 00:32:54.588 { 00:32:54.588 "method": "keyring_file_add_key", 00:32:54.588 "params": { 00:32:54.588 "name": "key1", 00:32:54.588 "path": "/tmp/tmp.AGOSOhyh2J" 00:32:54.588 } 00:32:54.588 } 00:32:54.588 ] 00:32:54.588 }, 00:32:54.588 { 00:32:54.588 "subsystem": "iobuf", 00:32:54.588 "config": [ 00:32:54.588 { 00:32:54.588 "method": "iobuf_set_options", 00:32:54.588 "params": { 00:32:54.588 "small_pool_count": 8192, 00:32:54.588 "large_pool_count": 1024, 00:32:54.588 "small_bufsize": 8192, 00:32:54.588 "large_bufsize": 135168 00:32:54.588 } 00:32:54.588 } 00:32:54.588 ] 00:32:54.588 }, 00:32:54.588 { 00:32:54.588 "subsystem": "sock", 00:32:54.588 "config": [ 00:32:54.588 { 00:32:54.588 "method": "sock_set_default_impl", 00:32:54.588 "params": { 00:32:54.588 "impl_name": "posix" 00:32:54.588 } 00:32:54.588 }, 00:32:54.588 { 00:32:54.588 "method": "sock_impl_set_options", 00:32:54.588 "params": { 00:32:54.588 "impl_name": "ssl", 00:32:54.588 "recv_buf_size": 4096, 00:32:54.588 "send_buf_size": 4096, 00:32:54.588 "enable_recv_pipe": true, 00:32:54.588 "enable_quickack": false, 00:32:54.588 "enable_placement_id": 0, 00:32:54.588 "enable_zerocopy_send_server": true, 00:32:54.588 "enable_zerocopy_send_client": false, 00:32:54.588 "zerocopy_threshold": 0, 00:32:54.588 "tls_version": 0, 00:32:54.588 "enable_ktls": false 00:32:54.588 } 00:32:54.588 }, 00:32:54.588 { 00:32:54.588 "method": "sock_impl_set_options", 00:32:54.588 "params": { 00:32:54.588 "impl_name": "posix", 00:32:54.588 "recv_buf_size": 2097152, 00:32:54.588 "send_buf_size": 2097152, 00:32:54.588 "enable_recv_pipe": true, 00:32:54.588 "enable_quickack": false, 00:32:54.588 "enable_placement_id": 0, 00:32:54.588 "enable_zerocopy_send_server": true, 00:32:54.588 "enable_zerocopy_send_client": false, 00:32:54.588 "zerocopy_threshold": 0, 00:32:54.588 "tls_version": 0, 00:32:54.588 "enable_ktls": false 00:32:54.588 } 00:32:54.588 } 00:32:54.588 ] 00:32:54.588 }, 00:32:54.588 { 00:32:54.588 "subsystem": "vmd", 00:32:54.588 "config": [] 00:32:54.588 }, 00:32:54.588 { 00:32:54.588 "subsystem": "accel", 00:32:54.588 "config": [ 00:32:54.588 { 00:32:54.588 "method": "accel_set_options", 00:32:54.588 "params": { 00:32:54.588 "small_cache_size": 128, 00:32:54.588 "large_cache_size": 16, 00:32:54.588 "task_count": 2048, 00:32:54.588 "sequence_count": 2048, 00:32:54.588 "buf_count": 2048 00:32:54.588 } 00:32:54.588 } 00:32:54.588 ] 00:32:54.588 }, 00:32:54.588 { 00:32:54.588 "subsystem": "bdev", 00:32:54.588 "config": [ 00:32:54.588 { 00:32:54.588 "method": "bdev_set_options", 00:32:54.588 "params": { 00:32:54.588 "bdev_io_pool_size": 65535, 00:32:54.588 "bdev_io_cache_size": 256, 00:32:54.588 "bdev_auto_examine": true, 00:32:54.588 "iobuf_small_cache_size": 128, 00:32:54.588 "iobuf_large_cache_size": 16 00:32:54.588 } 00:32:54.588 }, 00:32:54.588 { 00:32:54.588 "method": "bdev_raid_set_options", 00:32:54.588 "params": { 00:32:54.588 "process_window_size_kb": 1024 00:32:54.588 } 00:32:54.588 }, 00:32:54.588 { 00:32:54.588 
"method": "bdev_iscsi_set_options", 00:32:54.588 "params": { 00:32:54.588 "timeout_sec": 30 00:32:54.588 } 00:32:54.588 }, 00:32:54.588 { 00:32:54.588 "method": "bdev_nvme_set_options", 00:32:54.588 "params": { 00:32:54.588 "action_on_timeout": "none", 00:32:54.588 "timeout_us": 0, 00:32:54.588 "timeout_admin_us": 0, 00:32:54.588 "keep_alive_timeout_ms": 10000, 00:32:54.588 "arbitration_burst": 0, 00:32:54.588 "low_priority_weight": 0, 00:32:54.588 "medium_priority_weight": 0, 00:32:54.588 "high_priority_weight": 0, 00:32:54.588 "nvme_adminq_poll_period_us": 10000, 00:32:54.588 "nvme_ioq_poll_period_us": 0, 00:32:54.588 "io_queue_requests": 512, 00:32:54.588 "delay_cmd_submit": true, 00:32:54.588 "transport_retry_count": 4, 00:32:54.588 "bdev_retry_count": 3, 00:32:54.588 "transport_ack_timeout": 0, 00:32:54.588 "ctrlr_loss_timeout_sec": 0, 00:32:54.588 "reconnect_delay_sec": 0, 00:32:54.588 "fast_io_fail_timeout_sec": 0, 00:32:54.588 "disable_auto_failback": false, 00:32:54.588 "generate_uuids": false, 00:32:54.588 "transport_tos": 0, 00:32:54.588 "nvme_error_stat": false, 00:32:54.588 "rdma_srq_size": 0, 00:32:54.588 "io_path_stat": false, 00:32:54.588 "allow_accel_sequence": false, 00:32:54.588 "rdma_max_cq_size": 0, 00:32:54.588 "rdma_cm_event_timeout_ms": 0, 00:32:54.588 "dhchap_digests": [ 00:32:54.588 "sha256", 00:32:54.588 "sha384", 00:32:54.588 "sha512" 00:32:54.588 ], 00:32:54.588 "dhchap_dhgroups": [ 00:32:54.588 "null", 00:32:54.588 "ffdhe2048", 00:32:54.588 "ffdhe3072", 00:32:54.588 "ffdhe4096", 00:32:54.588 "ffdhe6144", 00:32:54.588 "ffdhe8192" 00:32:54.588 ] 00:32:54.588 } 00:32:54.588 }, 00:32:54.588 { 00:32:54.588 "method": "bdev_nvme_attach_controller", 00:32:54.588 "params": { 00:32:54.588 "name": "nvme0", 00:32:54.588 "trtype": "TCP", 00:32:54.588 "adrfam": "IPv4", 00:32:54.588 "traddr": "127.0.0.1", 00:32:54.588 "trsvcid": "4420", 00:32:54.588 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:54.588 "prchk_reftag": false, 00:32:54.588 "prchk_guard": false, 00:32:54.588 "ctrlr_loss_timeout_sec": 0, 00:32:54.588 "reconnect_delay_sec": 0, 00:32:54.588 "fast_io_fail_timeout_sec": 0, 00:32:54.588 "psk": "key0", 00:32:54.588 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:54.588 "hdgst": false, 00:32:54.588 "ddgst": false 00:32:54.588 } 00:32:54.588 }, 00:32:54.588 { 00:32:54.588 "method": "bdev_nvme_set_hotplug", 00:32:54.588 "params": { 00:32:54.588 "period_us": 100000, 00:32:54.588 "enable": false 00:32:54.588 } 00:32:54.588 }, 00:32:54.588 { 00:32:54.588 "method": "bdev_wait_for_examine" 00:32:54.589 } 00:32:54.589 ] 00:32:54.589 }, 00:32:54.589 { 00:32:54.589 "subsystem": "nbd", 00:32:54.589 "config": [] 00:32:54.589 } 00:32:54.589 ] 00:32:54.589 }' 00:32:54.589 [2024-07-15 11:44:23.076468] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
00:32:54.589 [2024-07-15 11:44:23.076521] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3786481 ] 00:32:54.589 EAL: No free 2048 kB hugepages reported on node 1 00:32:54.589 [2024-07-15 11:44:23.149301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:54.589 [2024-07-15 11:44:23.202462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:54.849 [2024-07-15 11:44:23.343722] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:55.420 11:44:23 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:55.420 11:44:23 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:55.420 11:44:23 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:55.420 11:44:23 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:55.420 11:44:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:55.420 11:44:23 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:55.420 11:44:23 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:32:55.420 11:44:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:55.420 11:44:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:55.420 11:44:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:55.420 11:44:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:55.420 11:44:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:55.680 11:44:24 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:55.680 11:44:24 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:55.680 11:44:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:55.680 11:44:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:55.680 11:44:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:55.680 11:44:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:55.680 11:44:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:55.680 11:44:24 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:55.680 11:44:24 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:55.680 11:44:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:55.680 11:44:24 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:55.944 11:44:24 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:55.944 11:44:24 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:55.944 11:44:24 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.LkAfv7G5o4 /tmp/tmp.AGOSOhyh2J 00:32:55.944 11:44:24 keyring_file -- keyring/file.sh@20 -- # killprocess 3786481 00:32:55.944 11:44:24 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3786481 ']' 00:32:55.944 11:44:24 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3786481 00:32:55.944 11:44:24 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:32:55.944 11:44:24 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:55.944 11:44:24 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3786481 00:32:55.944 11:44:24 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:55.944 11:44:24 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:55.944 11:44:24 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3786481' 00:32:55.944 killing process with pid 3786481 00:32:55.944 11:44:24 keyring_file -- common/autotest_common.sh@967 -- # kill 3786481 00:32:55.944 Received shutdown signal, test time was about 1.000000 seconds 00:32:55.944 00:32:55.944 Latency(us) 00:32:55.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:55.944 =================================================================================================================== 00:32:55.944 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:55.944 11:44:24 keyring_file -- common/autotest_common.sh@972 -- # wait 3786481 00:32:55.944 11:44:24 keyring_file -- keyring/file.sh@21 -- # killprocess 3784674 00:32:55.944 11:44:24 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3784674 ']' 00:32:55.944 11:44:24 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3784674 00:32:55.944 11:44:24 keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:55.944 11:44:24 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:56.263 11:44:24 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3784674 00:32:56.263 11:44:24 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:56.263 11:44:24 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:56.263 11:44:24 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3784674' 00:32:56.263 killing process with pid 3784674 00:32:56.263 11:44:24 keyring_file -- common/autotest_common.sh@967 -- # kill 3784674 00:32:56.263 [2024-07-15 11:44:24.693386] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:56.263 11:44:24 keyring_file -- common/autotest_common.sh@972 -- # wait 3784674 00:32:56.263 00:32:56.263 real 0m11.030s 00:32:56.263 user 0m25.786s 00:32:56.263 sys 0m2.603s 00:32:56.263 11:44:24 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:56.263 11:44:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:56.263 ************************************ 00:32:56.263 END TEST keyring_file 00:32:56.263 ************************************ 00:32:56.263 11:44:24 -- common/autotest_common.sh@1142 -- # return 0 00:32:56.263 11:44:24 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:32:56.263 11:44:24 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:56.263 11:44:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:56.263 11:44:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:56.263 11:44:24 -- common/autotest_common.sh@10 -- # set +x 00:32:56.525 ************************************ 00:32:56.525 START TEST keyring_linux 00:32:56.525 ************************************ 00:32:56.525 11:44:24 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:56.525 * Looking for test storage... 00:32:56.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:56.525 11:44:25 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:56.525 11:44:25 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:56.525 11:44:25 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:56.525 11:44:25 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:56.525 11:44:25 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:56.525 11:44:25 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:56.525 11:44:25 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:56.525 11:44:25 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:56.525 11:44:25 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:56.525 11:44:25 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:56.525 11:44:25 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:56.525 11:44:25 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:56.525 11:44:25 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:56.525 11:44:25 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:56.525 11:44:25 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:56.525 11:44:25 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:56.525 11:44:25 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:56.525 11:44:25 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:56.525 11:44:25 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:56.525 11:44:25 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:56.525 11:44:25 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:56.525 11:44:25 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:56.525 11:44:25 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:56.525 11:44:25 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.525 11:44:25 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.525 11:44:25 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.525 11:44:25 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:56.525 11:44:25 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.525 11:44:25 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:32:56.525 11:44:25 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:56.525 11:44:25 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:56.525 11:44:25 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:56.525 11:44:25 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:56.525 11:44:25 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:56.525 11:44:25 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:56.525 11:44:25 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:56.525 11:44:25 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:56.525 11:44:25 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:56.525 11:44:25 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:56.525 11:44:25 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:56.525 11:44:25 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:56.525 11:44:25 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:56.525 11:44:25 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:56.525 11:44:25 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:56.525 11:44:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:56.525 11:44:25 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:56.525 11:44:25 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:56.525 11:44:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:56.525 11:44:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:56.525 11:44:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:56.526 11:44:25 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:56.526 11:44:25 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:56.526 11:44:25 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:56.526 11:44:25 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:56.526 11:44:25 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:56.526 11:44:25 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:56.526 11:44:25 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:56.526 11:44:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:56.526 /tmp/:spdk-test:key0 00:32:56.526 11:44:25 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:56.526 11:44:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:56.526 11:44:25 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:56.526 11:44:25 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:56.526 11:44:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:56.526 11:44:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:56.526 11:44:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:56.526 11:44:25 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:56.526 11:44:25 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:56.526 11:44:25 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:56.526 11:44:25 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:56.526 11:44:25 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:56.526 11:44:25 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:56.526 11:44:25 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:56.526 11:44:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:56.526 /tmp/:spdk-test:key1 00:32:56.526 11:44:25 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3787044 00:32:56.526 11:44:25 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3787044 00:32:56.526 11:44:25 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:56.526 11:44:25 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3787044 ']' 00:32:56.526 11:44:25 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:56.526 11:44:25 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:56.526 11:44:25 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:56.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:56.526 11:44:25 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:56.526 11:44:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:56.787 [2024-07-15 11:44:25.262661] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
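Unlike the keyring_file flow, the keyring_linux flow that follows never hands the target the /tmp/:spdk-test:key0 path: the interchange string is loaded into the kernel session keyring with keyctl and later resolved by name. A sketch of the kernel-keyring half of that flow, using the key name this test uses (the serials printed below, 832452676 and 836832961, are specific to this run):

```bash
# Kernel-keyring side of the keyring_linux test; serials will differ per run.
psk0=$(cat /tmp/:spdk-test:key0)             # interchange string written by prep_key above
keyctl add user :spdk-test:key0 "$psk0" @s   # linux.sh@66, prints the new serial (832452676 here)
sn=$(keyctl search @s user :spdk-test:key0)  # linux.sh@16, resolves the name back to a serial
keyctl print "$sn"                           # payload must equal the interchange string
keyctl unlink "$sn"                          # cleanup, linux.sh@34 ("1 links removed" below)
```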
00:32:56.787 [2024-07-15 11:44:25.262715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3787044 ] 00:32:56.787 EAL: No free 2048 kB hugepages reported on node 1 00:32:56.787 [2024-07-15 11:44:25.321872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.787 [2024-07-15 11:44:25.386304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.357 11:44:26 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:57.358 11:44:26 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:32:57.358 11:44:26 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:57.358 11:44:26 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.358 11:44:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:57.358 [2024-07-15 11:44:26.036118] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:57.358 null0 00:32:57.619 [2024-07-15 11:44:26.068166] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:57.619 [2024-07-15 11:44:26.068565] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:57.619 11:44:26 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.619 11:44:26 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:57.619 832452676 00:32:57.619 11:44:26 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:57.619 836832961 00:32:57.619 11:44:26 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3787064 00:32:57.619 11:44:26 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3787064 /var/tmp/bperf.sock 00:32:57.619 11:44:26 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:57.619 11:44:26 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3787064 ']' 00:32:57.619 11:44:26 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:57.619 11:44:26 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:57.619 11:44:26 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:57.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:57.619 11:44:26 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:57.619 11:44:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:57.619 [2024-07-15 11:44:26.143449] Starting SPDK v24.09-pre git sha1 3b4b1d00c / DPDK 24.03.0 initialization... 
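Because this bdevperf instance was launched with --wait-for-rpc, the Linux keyring plugin is switched on over the bperf socket before framework init, and only then is the controller attached using a kernel-keyring name instead of a file path (linux.sh@73-75, visible just below). A condensed sketch of that sequence:

```bash
# Condensed version of keyring/linux.sh@73-75 as run over the bperf socket below.
rpc="./spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
$rpc keyring_linux_set_options --enable        # enable the linux keyring plugin first
$rpc framework_start_init                      # bdevperf was started with --wait-for-rpc
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
```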
00:32:57.619 [2024-07-15 11:44:26.143495] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3787064 ] 00:32:57.619 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.619 [2024-07-15 11:44:26.218975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.619 [2024-07-15 11:44:26.272505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:58.563 11:44:26 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:58.563 11:44:26 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:32:58.563 11:44:26 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:58.563 11:44:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:58.563 11:44:27 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:58.563 11:44:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:58.563 11:44:27 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:58.563 11:44:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:58.824 [2024-07-15 11:44:27.387043] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:58.824 nvme0n1 00:32:58.824 11:44:27 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:58.824 11:44:27 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:58.824 11:44:27 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:58.824 11:44:27 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:58.824 11:44:27 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:58.824 11:44:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:59.084 11:44:27 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:59.084 11:44:27 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:59.084 11:44:27 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:59.084 11:44:27 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:59.084 11:44:27 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:59.084 11:44:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:59.084 11:44:27 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:59.345 11:44:27 keyring_linux -- keyring/linux.sh@25 -- # sn=832452676 00:32:59.345 11:44:27 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:59.345 11:44:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:32:59.345 11:44:27 keyring_linux -- keyring/linux.sh@26 -- # [[ 832452676 == \8\3\2\4\5\2\6\7\6 ]] 00:32:59.345 11:44:27 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 832452676 00:32:59.345 11:44:27 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:59.345 11:44:27 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:59.345 Running I/O for 1 seconds... 00:33:00.287 00:33:00.287 Latency(us) 00:33:00.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:00.287 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:00.287 nvme0n1 : 1.02 7934.86 31.00 0.00 0.00 15996.67 4041.39 17694.72 00:33:00.287 =================================================================================================================== 00:33:00.287 Total : 7934.86 31.00 0.00 0.00 15996.67 4041.39 17694.72 00:33:00.287 0 00:33:00.287 11:44:28 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:00.287 11:44:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:00.547 11:44:29 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:33:00.547 11:44:29 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:33:00.547 11:44:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:00.547 11:44:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:00.547 11:44:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:00.547 11:44:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:00.547 11:44:29 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:33:00.547 11:44:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:00.547 11:44:29 keyring_linux -- keyring/linux.sh@23 -- # return 00:33:00.547 11:44:29 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:00.547 11:44:29 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:33:00.547 11:44:29 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:00.547 11:44:29 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:00.547 11:44:29 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:00.547 11:44:29 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:00.547 11:44:29 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:00.547 11:44:29 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:00.547 11:44:29 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:00.807 [2024-07-15 11:44:29.391331] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:00.807 [2024-07-15 11:44:29.392059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x156a950 (107): Transport endpoint is not connected 00:33:00.807 [2024-07-15 11:44:29.393056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x156a950 (9): Bad file descriptor 00:33:00.807 [2024-07-15 11:44:29.394058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:00.807 [2024-07-15 11:44:29.394067] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:00.807 [2024-07-15 11:44:29.394072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:00.807 request: 00:33:00.807 { 00:33:00.807 "name": "nvme0", 00:33:00.807 "trtype": "tcp", 00:33:00.807 "traddr": "127.0.0.1", 00:33:00.807 "adrfam": "ipv4", 00:33:00.808 "trsvcid": "4420", 00:33:00.808 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:00.808 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:00.808 "prchk_reftag": false, 00:33:00.808 "prchk_guard": false, 00:33:00.808 "hdgst": false, 00:33:00.808 "ddgst": false, 00:33:00.808 "psk": ":spdk-test:key1", 00:33:00.808 "method": "bdev_nvme_attach_controller", 00:33:00.808 "req_id": 1 00:33:00.808 } 00:33:00.808 Got JSON-RPC error response 00:33:00.808 response: 00:33:00.808 { 00:33:00.808 "code": -5, 00:33:00.808 "message": "Input/output error" 00:33:00.808 } 00:33:00.808 11:44:29 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:33:00.808 11:44:29 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:00.808 11:44:29 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:00.808 11:44:29 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:00.808 11:44:29 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:33:00.808 11:44:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:00.808 11:44:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:33:00.808 11:44:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:33:00.808 11:44:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:33:00.808 11:44:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:00.808 11:44:29 keyring_linux -- keyring/linux.sh@33 -- # sn=832452676 00:33:00.808 11:44:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 832452676 00:33:00.808 1 links removed 00:33:00.808 11:44:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:00.808 11:44:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:33:00.808 11:44:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:33:00.808 11:44:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:33:00.808 11:44:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:33:00.808 11:44:29 keyring_linux -- keyring/linux.sh@33 -- # sn=836832961 00:33:00.808 
11:44:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 836832961 00:33:00.808 1 links removed 00:33:00.808 11:44:29 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3787064 00:33:00.808 11:44:29 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3787064 ']' 00:33:00.808 11:44:29 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3787064 00:33:00.808 11:44:29 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:33:00.808 11:44:29 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:00.808 11:44:29 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3787064 00:33:00.808 11:44:29 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:00.808 11:44:29 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:00.808 11:44:29 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3787064' 00:33:00.808 killing process with pid 3787064 00:33:00.808 11:44:29 keyring_linux -- common/autotest_common.sh@967 -- # kill 3787064 00:33:00.808 Received shutdown signal, test time was about 1.000000 seconds 00:33:00.808 00:33:00.808 Latency(us) 00:33:00.808 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:00.808 =================================================================================================================== 00:33:00.808 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:00.808 11:44:29 keyring_linux -- common/autotest_common.sh@972 -- # wait 3787064 00:33:01.068 11:44:29 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3787044 00:33:01.068 11:44:29 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3787044 ']' 00:33:01.068 11:44:29 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3787044 00:33:01.068 11:44:29 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:33:01.068 11:44:29 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:01.068 11:44:29 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3787044 00:33:01.068 11:44:29 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:01.068 11:44:29 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:01.068 11:44:29 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3787044' 00:33:01.068 killing process with pid 3787044 00:33:01.068 11:44:29 keyring_linux -- common/autotest_common.sh@967 -- # kill 3787044 00:33:01.068 11:44:29 keyring_linux -- common/autotest_common.sh@972 -- # wait 3787044 00:33:01.331 00:33:01.331 real 0m4.880s 00:33:01.331 user 0m8.294s 00:33:01.331 sys 0m1.375s 00:33:01.331 11:44:29 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:01.331 11:44:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:01.331 ************************************ 00:33:01.331 END TEST keyring_linux 00:33:01.331 ************************************ 00:33:01.331 11:44:29 -- common/autotest_common.sh@1142 -- # return 0 00:33:01.331 11:44:29 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:33:01.331 11:44:29 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:33:01.331 11:44:29 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:33:01.331 11:44:29 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:33:01.331 11:44:29 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:33:01.331 11:44:29 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:33:01.331 11:44:29 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:33:01.331 11:44:29 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:33:01.331 11:44:29 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:33:01.331 11:44:29 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:33:01.331 11:44:29 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:33:01.331 11:44:29 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:33:01.331 11:44:29 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:33:01.331 11:44:29 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:33:01.331 11:44:29 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:33:01.331 11:44:29 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:33:01.331 11:44:29 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:33:01.331 11:44:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:01.331 11:44:29 -- common/autotest_common.sh@10 -- # set +x 00:33:01.331 11:44:29 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:33:01.331 11:44:29 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:01.331 11:44:29 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:01.331 11:44:29 -- common/autotest_common.sh@10 -- # set +x 00:33:09.492 INFO: APP EXITING 00:33:09.492 INFO: killing all VMs 00:33:09.492 INFO: killing vhost app 00:33:09.492 WARN: no vhost pid file found 00:33:09.492 INFO: EXIT DONE 00:33:12.041 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:33:12.041 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:33:12.041 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:33:12.041 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:33:12.041 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:33:12.041 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:33:12.041 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:33:12.041 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:33:12.041 0000:65:00.0 (144d a80a): Already using the nvme driver 00:33:12.041 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:33:12.041 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:33:12.041 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:33:12.041 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:33:12.301 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:33:12.301 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:33:12.301 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:33:12.301 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:33:16.504 Cleaning 00:33:16.504 Removing: /var/run/dpdk/spdk0/config 00:33:16.504 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:16.504 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:16.504 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:16.504 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:16.504 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:16.504 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:16.504 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:16.504 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:16.504 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:16.504 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:16.504 Removing: /var/run/dpdk/spdk1/config 00:33:16.504 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:16.504 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:16.504 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:16.504 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:16.504 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:16.504 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:16.504 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:16.504 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:16.504 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:16.504 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:16.504 Removing: /var/run/dpdk/spdk1/mp_socket 00:33:16.504 Removing: /var/run/dpdk/spdk2/config 00:33:16.504 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:16.504 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:16.504 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:16.504 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:16.504 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:16.504 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:16.504 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:16.504 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:16.504 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:16.504 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:16.504 Removing: /var/run/dpdk/spdk3/config 00:33:16.504 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:16.504 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:16.504 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:16.504 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:16.504 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:16.504 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:16.504 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:16.504 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:16.504 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:16.505 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:16.505 Removing: /var/run/dpdk/spdk4/config 00:33:16.505 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:16.505 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:16.505 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:16.505 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:16.505 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:16.505 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:16.505 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:16.505 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:16.505 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:16.505 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:16.505 Removing: /dev/shm/bdev_svc_trace.1 00:33:16.505 Removing: /dev/shm/nvmf_trace.0 00:33:16.505 Removing: /dev/shm/spdk_tgt_trace.pid3330901 00:33:16.505 Removing: /var/run/dpdk/spdk0 00:33:16.505 Removing: /var/run/dpdk/spdk1 00:33:16.505 Removing: /var/run/dpdk/spdk2 00:33:16.505 Removing: /var/run/dpdk/spdk3 00:33:16.505 Removing: /var/run/dpdk/spdk4 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3329301 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3330901 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3331423 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3332568 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3332807 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3334018 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3334199 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3334514 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3335455 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3336227 00:33:16.505 Removing: 
/var/run/dpdk/spdk_pid3336606 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3336882 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3337172 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3337472 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3337831 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3338181 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3338500 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3339533 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3342931 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3343116 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3343648 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3344061 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3344552 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3344737 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3345258 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3345293 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3345637 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3345966 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3346017 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3346346 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3346786 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3347135 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3347414 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3347599 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3347786 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3347984 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3348339 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3348537 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3348733 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3349078 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3349427 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3349776 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3349992 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3350182 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3350522 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3350872 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3351219 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3351463 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3351658 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3351961 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3352314 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3352663 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3352946 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3353163 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3353408 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3353756 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3353881 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3354233 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3358683 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3411497 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3416657 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3428366 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3434760 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3439612 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3440293 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3447486 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3455263 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3455287 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3456335 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3457391 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3458452 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3459086 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3459225 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3459446 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3459623 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3459626 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3460631 00:33:16.505 Removing: 
/var/run/dpdk/spdk_pid3461635 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3462649 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3463321 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3463344 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3463661 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3465085 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3466293 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3476484 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3476840 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3481856 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3488605 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3491683 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3504398 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3515068 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3517087 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3518094 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3538365 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3542906 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3574959 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3580124 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3582124 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3584229 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3584487 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3584824 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3584950 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3585563 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3587898 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3588973 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3589381 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3592632 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3593339 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3594102 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3599095 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3611014 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3615842 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3623051 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3624553 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3626382 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3631461 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3636271 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3645344 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3645352 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3650851 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3651185 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3651304 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3651857 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3651864 00:33:16.505 Removing: /var/run/dpdk/spdk_pid3657231 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3658046 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3663229 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3666435 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3672951 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3679159 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3689076 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3697377 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3697388 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3720424 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3721113 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3721912 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3722736 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3723715 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3724477 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3725225 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3725913 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3730959 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3731290 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3738324 00:33:16.767 Removing: 
/var/run/dpdk/spdk_pid3738598 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3741209 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3748751 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3748756 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3755079 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3757296 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3759780 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3761025 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3763494 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3764903 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3774814 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3775319 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3775968 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3778903 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3779556 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3780028 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3784674 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3784800 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3786481 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3787044 00:33:16.767 Removing: /var/run/dpdk/spdk_pid3787064 00:33:16.767 Clean 00:33:16.767 11:44:45 -- common/autotest_common.sh@1451 -- # return 0 00:33:16.767 11:44:45 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:33:16.767 11:44:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:16.767 11:44:45 -- common/autotest_common.sh@10 -- # set +x 00:33:17.028 11:44:45 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:33:17.028 11:44:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:17.028 11:44:45 -- common/autotest_common.sh@10 -- # set +x 00:33:17.028 11:44:45 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:17.028 11:44:45 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:33:17.028 11:44:45 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:33:17.028 11:44:45 -- spdk/autotest.sh@391 -- # hash lcov 00:33:17.028 11:44:45 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:17.028 11:44:45 -- spdk/autotest.sh@393 -- # hostname 00:33:17.028 11:44:45 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:33:17.028 geninfo: WARNING: invalid characters removed from testname! 
00:33:43.652 11:45:09 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:43.940 11:45:12 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:46.486 11:45:14 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:47.869 11:45:16 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:49.781 11:45:18 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:51.165 11:45:19 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:52.549 11:45:21 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:52.811 11:45:21 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:52.811 11:45:21 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:52.811 11:45:21 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:52.811 11:45:21 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:52.811 11:45:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.811 11:45:21 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.811 11:45:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.811 11:45:21 -- paths/export.sh@5 -- $ export PATH 00:33:52.811 11:45:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.811 11:45:21 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:33:52.811 11:45:21 -- common/autobuild_common.sh@444 -- $ date +%s 00:33:52.811 11:45:21 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721036721.XXXXXX 00:33:52.811 11:45:21 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721036721.8Q2aAQ 00:33:52.811 11:45:21 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:33:52.811 11:45:21 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:33:52.811 11:45:21 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:33:52.811 11:45:21 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:33:52.811 11:45:21 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:33:52.811 11:45:21 -- common/autobuild_common.sh@460 -- $ get_config_params 00:33:52.811 11:45:21 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:33:52.811 11:45:21 -- common/autotest_common.sh@10 -- $ set +x 00:33:52.811 11:45:21 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:33:52.811 11:45:21 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:33:52.811 11:45:21 -- pm/common@17 -- $ local monitor 00:33:52.811 11:45:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:52.811 11:45:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:52.811 11:45:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:52.811 11:45:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:52.811 11:45:21 -- pm/common@21 -- $ date +%s 00:33:52.811 11:45:21 -- pm/common@25 -- $ sleep 1 00:33:52.811 
11:45:21 -- pm/common@21 -- $ date +%s 00:33:52.811 11:45:21 -- pm/common@21 -- $ date +%s 00:33:52.811 11:45:21 -- pm/common@21 -- $ date +%s 00:33:52.811 11:45:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721036721 00:33:52.811 11:45:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721036721 00:33:52.811 11:45:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721036721 00:33:52.811 11:45:21 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721036721 00:33:52.811 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721036721_collect-vmstat.pm.log 00:33:52.811 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721036721_collect-cpu-load.pm.log 00:33:52.811 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721036721_collect-cpu-temp.pm.log 00:33:52.811 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721036721_collect-bmc-pm.bmc.pm.log 00:33:53.751 11:45:22 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:33:53.751 11:45:22 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:33:53.751 11:45:22 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:53.751 11:45:22 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:53.751 11:45:22 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:33:53.751 11:45:22 -- spdk/autopackage.sh@19 -- $ timing_finish 00:33:53.751 11:45:22 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:53.751 11:45:22 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:33:53.751 11:45:22 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:53.751 11:45:22 -- spdk/autopackage.sh@20 -- $ exit 0 00:33:53.751 11:45:22 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:33:53.751 11:45:22 -- pm/common@29 -- $ signal_monitor_resources TERM 00:33:53.751 11:45:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:33:53.751 11:45:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:53.751 11:45:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:33:53.751 11:45:22 -- pm/common@44 -- $ pid=3800100 00:33:53.751 11:45:22 -- pm/common@50 -- $ kill -TERM 3800100 00:33:53.751 11:45:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:53.751 11:45:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:33:53.751 11:45:22 -- pm/common@44 -- $ pid=3800101 00:33:53.751 11:45:22 -- pm/common@50 -- $ 
kill -TERM 3800101 00:33:53.751 11:45:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:53.751 11:45:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:33:53.751 11:45:22 -- pm/common@44 -- $ pid=3800103 00:33:53.751 11:45:22 -- pm/common@50 -- $ kill -TERM 3800103 00:33:53.751 11:45:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:53.751 11:45:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:33:53.751 11:45:22 -- pm/common@44 -- $ pid=3800126 00:33:53.751 11:45:22 -- pm/common@50 -- $ sudo -E kill -TERM 3800126 00:33:53.751 + [[ -n 3209170 ]] 00:33:53.751 + sudo kill 3209170 00:33:53.820 [Pipeline] } 00:33:53.837 [Pipeline] // stage 00:33:53.842 [Pipeline] } 00:33:53.854 [Pipeline] // timeout 00:33:53.859 [Pipeline] } 00:33:53.878 [Pipeline] // catchError 00:33:53.884 [Pipeline] } 00:33:53.904 [Pipeline] // wrap 00:33:53.911 [Pipeline] } 00:33:53.921 [Pipeline] // catchError 00:33:53.929 [Pipeline] stage 00:33:53.930 [Pipeline] { (Epilogue) 00:33:53.943 [Pipeline] catchError 00:33:53.945 [Pipeline] { 00:33:53.959 [Pipeline] echo 00:33:53.960 Cleanup processes 00:33:53.964 [Pipeline] sh 00:33:54.250 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:54.250 3800207 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:33:54.250 3800649 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:54.263 [Pipeline] sh 00:33:54.547 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:54.547 ++ grep -v 'sudo pgrep' 00:33:54.547 ++ awk '{print $1}' 00:33:54.547 + sudo kill -9 3800207 00:33:54.560 [Pipeline] sh 00:33:54.847 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:07.089 [Pipeline] sh 00:34:07.414 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:07.414 Artifacts sizes are good 00:34:07.431 [Pipeline] archiveArtifacts 00:34:07.440 Archiving artifacts 00:34:07.636 [Pipeline] sh 00:34:07.923 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:34:07.940 [Pipeline] cleanWs 00:34:07.953 [WS-CLEANUP] Deleting project workspace... 00:34:07.953 [WS-CLEANUP] Deferred wipeout is used... 00:34:07.960 [WS-CLEANUP] done 00:34:07.963 [Pipeline] } 00:34:07.983 [Pipeline] // catchError 00:34:07.996 [Pipeline] sh 00:34:08.285 + logger -p user.info -t JENKINS-CI 00:34:08.296 [Pipeline] } 00:34:08.313 [Pipeline] // stage 00:34:08.320 [Pipeline] } 00:34:08.343 [Pipeline] // node 00:34:08.348 [Pipeline] End of Pipeline 00:34:08.399 Finished: SUCCESS